
Showing papers on "Closed captioning" published in 2001


Patent
Sara H. Basson1, Dimitri Kanevsky1
31 Jan 2001
TL;DR: In this paper, a portable and universal closed caption receiving device (closed caption receiver) is proposed for video/audio content display systems, which allows a user to receive closed captioning services regardless of whether the content display system provides closed caption capabilities.
Abstract: Methods and apparatus for portable and universal receipt of closed captioning services by a user are provided in accordance with the invention. The invention permits a user to receive closed captioning services wherever he or she may be viewing video content with an audio component in accordance with a video/audio content display system (e.g., television set, computer monitor, movie theater), regardless of whether the content display system provides closed captioning capabilities. Also, the invention permits a user to receive closed captioning services independent of the video/audio content display system that they are using to view the video content. In one illustrative aspect of the present invention, a portable and universal closed caption receiving device (closed caption receiver) is provided for: (i) receiving a signal, which includes closed captions, from a closed caption provider while the user watches a program on a video/audio content display system; (ii) extracting the closed captions; and (iii) providing the closed captions to a head mounted display for presentation to the user so that the user may view the program and, at the same time, view the closed captions in synchronization with the video content of the program.
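
A minimal sketch of the three-step receive/extract/display loop the patent describes. The UDP transport, the length-prefixed packet format, and the HeadMountedDisplay's show() method are all assumptions for illustration, not details from the patent.

```python
# Minimal sketch of the receive/extract/display loop, under assumed transport.
import socket

CAPTION_PORT = 5000  # hypothetical port used by the caption provider

def extract_caption(packet: bytes) -> str:
    """Assume each packet is a UTF-8 caption line prefixed by a 4-byte length."""
    length = int.from_bytes(packet[:4], "big")
    return packet[4:4 + length].decode("utf-8", errors="replace")

def run_receiver(display) -> None:
    """Receive caption packets and forward them to a head-mounted display."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", CAPTION_PORT))
    while True:
        packet, _addr = sock.recvfrom(4096)   # (i) receive the signal
        caption = extract_caption(packet)     # (ii) extract the captions
        display.show(caption)                 # (iii) present on the HMD
```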

116 citations


Journal ArticleDOI
TL;DR: The captioned video provided significantly better comprehension of the script for students who are deaf, suggesting that visual stimuli provide essential information for viewers who are deaf, which improves comprehension of televised script.
Abstract: Recent legislation has made captioned television programs common technology; consequently, televised programs have become more accessible to a broader public. In the United States, television captions are generally in written English, yet the English-literacy rates among people who are deaf are low compared to hearing peers. This research tests the accessibility of television by assessing deaf and hearing students' comprehension of captions with and without visuals/video based on their ability to respond correctly to questions about the script and central details. Results indicate that reading grade level is highly correlated with caption comprehension test scores. Across caption conditions, comprehension test scores of students who are deaf were consistently below the scores of hearing students. The captioned video provided significantly better comprehension of the script for students who are deaf, suggesting that visual stimuli provide essential information for viewers who are deaf, which improves comprehension of televised script.

99 citations


Patent
30 Aug 2001
TL;DR: In this article, closed caption data is extracted from the television signal and then processed in a speech synthesizer to provide said words as speech in a desired language, which can be translated from a first language to a second language prior to or concurrently with conversion to speech.
Abstract: Television speech is provided in a desired language using closed caption data already present in a received television signal. The closed caption data, which is representative of words, is extracted from the television signal. The closed caption data is then processed in a speech synthesizer to provide said words as speech in a desired language. The closed caption data can be translated from a first language to a second language prior to or concurrently with conversion to speech. Alternatively, the closed caption data can be carried in various languages in the television signal, and the data in the desired language can be selected for extraction from the television signal and conversion to speech.
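
A minimal sketch of the caption-to-speech pipeline described above, assuming pluggable translate() and synthesize() back ends; neither back end is specified by the patent, and the frame dictionary layout is invented.

```python
# Sketch of the caption-to-speech pipeline under assumed back ends.
def captions_to_speech(frames, translate, synthesize, target_lang="es"):
    """Extract closed caption text from decoded frames, optionally translate,
    then hand the words to a speech synthesizer."""
    for frame in frames:
        text = frame.get("cc_text")  # caption data already in the signal
        if not text:
            continue
        translated = translate(text, target_lang)    # may be a no-op
        audio = synthesize(translated, target_lang)  # e.g. PCM samples
        yield audio
```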

86 citations


Patent
28 Sep 2001
TL;DR: In this paper, a multi-lingual transcription system for processing a synchronized audio/video signal containing an auxiliary information component from an original language to a target language is provided, which filters text data from the auxiliary information component, translates the text data into the target language and displays the translated text data while simultaneously playing the audio and video components of the synchronized signal.
Abstract: A multi-lingual transcription system for processing a synchronized audio/video signal containing an auxiliary information component from an original language to a target language is provided. The system filters text data from the auxiliary information component, translates the text data into the target language and displays the translated text data while simultaneously playing an audio and video component of the synchronized signal. The system additionally provides a memory for storing a plurality of language databases which include a metaphor interpreter and thesaurus and may optionally include a parser for identifying parts of speech of the translated text. The auxiliary information component can be any language text associated with an audio/video signal, i.e., video text, text generated by speech recognition software, program transcripts, electronic program guide information, closed caption text, etc.

71 citations


Proceedings ArticleDOI
22 Aug 2001
TL;DR: A novel statistical approach is presented, called the weighted voting method, for automatic news video story categorization based on the closed captioned text.
Abstract: In this paper, we present a novel statistical approach, called the weighted voting method, for automatic news video story categorization based on the closed captioned text. News video is initially segmented into stories using the demarcations in the closed captioned text, then a set of […]
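
The abstract is truncated, but the idea in the TL;DR can be illustrated with a toy example: each caption word casts weighted votes for the categories it is associated with, and the story takes the top-scoring category. The word list and weights below are invented; the paper derives them statistically from training data.

```python
# Toy weighted-voting categorization over closed-caption text.
from collections import defaultdict

WORD_WEIGHTS = {  # word -> {category: vote weight} (hypothetical values)
    "senate":    {"politics": 2.0},
    "touchdown": {"sports": 3.0},
    "storm":     {"weather": 2.5, "news": 0.5},
}

def categorize_story(caption_text: str) -> str:
    votes = defaultdict(float)
    for word in caption_text.lower().split():
        for category, weight in WORD_WEIGHTS.get(word, {}).items():
            votes[category] += weight
    return max(votes, key=votes.get) if votes else "uncategorized"

print(categorize_story("The storm delayed the Senate vote"))  # -> weather (2.5 vs 2.0)
```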

61 citations


Patent
Michael Kahn1
13 Nov 2001
TL;DR: In this paper, a system and associated method of converting audio data from a television signal into textual data for display as a closed caption on a display device is provided, where audio data is decoded and audio speech signals are filtered from the audio data.
Abstract: A system and associated method of converting audio data from a television signal into textual data for display as a closed caption on a display device is provided. The audio data is decoded and audio speech signals are filtered from the audio data. The audio speech signals are parsed into phonemes by a speech recognition module. The parsed phonemes are grouped into words and sentences responsive to a database of words corresponding to the grouped phonemes. The words are converted into text data which is formatted for presentation on the display device as closed-captioned textual data.
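
A simplified sketch of the phoneme-grouping step: greedily match the longest phoneme sequence found in a pronunciation dictionary. The dictionary entries and phoneme symbols are illustrative, not from the patent.

```python
# Greedy longest-match grouping of phonemes into words via a dictionary.
PRONUNCIATIONS = {
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}
MAX_WORD_LEN = max(len(k) for k in PRONUNCIATIONS)

def phonemes_to_words(phonemes):
    words, i = [], 0
    while i < len(phonemes):
        for span in range(min(MAX_WORD_LEN, len(phonemes) - i), 0, -1):
            chunk = tuple(phonemes[i:i + span])
            if chunk in PRONUNCIATIONS:
                words.append(PRONUNCIATIONS[chunk])
                i += span
                break
        else:
            i += 1  # skip a phoneme that matches no dictionary entry
    return " ".join(words)

print(phonemes_to_words(["HH", "EH", "L", "OW", "W", "ER", "L", "D"]))  # hello world
```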

54 citations


Proceedings ArticleDOI
07 Oct 2001
TL;DR: This paper describes the elements of the system and presents results from running Video Scout on real TV programs and incorporates a Bayesian framework that integrates information from the audio, visual, and transcript (closed captions) domains.
Abstract: We describe integrated multimedia processing for Video Scout, a system that segments and indexes TV programs according to their audio, visual, and transcript information. Video Scout represents a future direction for personal video recorders. In addition to using electronic program guide metadata and a user profile, Scout allows users to request specific topics within a program. For example, users can request the video clip of the U.S. president speaking from a half-hour news program. Video Scout has three modules: (i) video pre-processing, (ii) segmentation and indexing, and (iii) storage and user interface. Segmentation and indexing, the core of the system, incorporates a Bayesian framework that integrates information from the audio, visual, and transcript (closed captions) domains. This framework uses three layers to process low, mid, and high-level multimedia information. The high-level layer generates semantic information about TV program topics. This paper describes the elements of the system and presents results from running Video Scout on real TV programs.
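
A minimal sketch of naive-Bayes style fusion across the three modalities. The per-modality likelihoods would come from Video Scout's lower layers; the numbers below are placeholders, and the conditional-independence assumption is what makes the products (log sums) valid.

```python
# Naive-Bayes fusion of audio, visual, and transcript evidence for a topic.
import math

def fuse_topic_scores(audio_p, visual_p, transcript_p, prior):
    """Combine per-modality likelihoods P(evidence | topic) with a topic prior,
    assuming conditional independence between modalities."""
    scores = {}
    for topic in prior:
        scores[topic] = (math.log(prior[topic])
                         + math.log(audio_p[topic])
                         + math.log(visual_p[topic])
                         + math.log(transcript_p[topic]))
    return max(scores, key=scores.get)

topic = fuse_topic_scores(
    audio_p={"speech": 0.7, "commercial": 0.3},
    visual_p={"speech": 0.6, "commercial": 0.4},
    transcript_p={"speech": 0.8, "commercial": 0.2},
    prior={"speech": 0.5, "commercial": 0.5},
)
print(topic)  # speech
```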

48 citations


Patent
06 Sep 2001
TL;DR: In this article, a method and an apparatus are presented for use in connection with home television video recording, playback, and viewing involving processing an electronic signal, including audio and video information, whereby the audio information, including digital representations thereof, is analyzed and modified to compare words and phrases represented in the audio information with words and phrases stored in electronic memory for elimination of undesirable words or phrases in audible or visible representations of the audio, with options for replacing undesirable words with acceptable words.
Abstract: A method and an apparatus for use in connection with home television video recording, playback, and viewing involving processing an electronic signal, including audio and video information, whereby the audio information, including digital representations thereof, is analyzed and modified to compare words and phrases represented in the audio information with words and phrases stored in electronic memory for elimination of undesirable words or phrases in audible or visible representations of the audio with options for replacing undesirable words with acceptable words. The options include varying degrees of selectivity in specifying words as undesirable and control over substitute words which are used to replace undesirable words. The options for control of the method and apparatus for language filtering are selectable from an on-screen menu through operation of a control panel on the language filter apparatus or by use of a conventional television remote transmitter. Full capability of the method and apparatus depends only on presence of closed caption or similar digitally-encoded language information being received with a television signal but special instructions transmitted with a television signal may also be responded to for activating particular language libraries or customizing a library for the program material.
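
A minimal sketch of the language-filtering step: compare each decoded caption word against a stored library and substitute an acceptable replacement. The word lists below are placeholders standing in for the patent's electronic-memory language libraries.

```python
# Caption-based language filter: substitute listed words with replacements.
REPLACEMENTS = {"darn": "gosh", "heck": "goodness"}  # hypothetical library

def filter_caption_line(line: str) -> str:
    out = []
    for word in line.split():
        key = word.lower().strip(".,!?")  # ignore trailing punctuation for lookup
        out.append(REPLACEMENTS.get(key, word))
    return " ".join(out)

print(filter_caption_line("What the heck happened?"))  # What the goodness happened?
```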

48 citations


Patent
Stephen J. Orr1
20 Aug 2001
TL;DR: In this paper, a system and method for converting text data having a Teletext format to text data having an Electronic Industries Alliance 608 (EIA-608) format are illustrated.
Abstract: A system and method for converting text data having a Teletext format to text data having an Electronic Industries Alliance 608 (EIA-608) format are illustrated herein. A video stream with embedded text data having a Teletext format is received by a dual mode text processing system. The dual mode text processing system, in one embodiment, extracts the text data and filters the text data to identify a desired portion using an identifier, such as a page identifier or number. The desired portion (or a copy thereof), once identified, is sent to a line break parser. The line break parser, in one embodiment, eliminates some or all of any unnecessary or unintended line breaks, as well as some or all of any extra space characters, to generate a character stream. The character stream, in one embodiment, is then converted into an EIA-608 format by a line convertor, wherein the character stream is parsed into one or more subtitle lines with a maximum character length. An end-of-line break, in one embodiment, is added to the end of each subtitle line. The output of the line convertor, in one embodiment, is buffered by a rate modulator which outputs the buffered text data at a specified rate to minimize the character transmission rate disparity between the Teletext and EIA-608 specifications. The output of the rate modulator can then be encoded into an EIA-608 format by an EIA-608 encoder. The EIA-608 encoded data can then be decoded by a closed captioning decoder and displayed as Closed Captioning text subtitles, stored in file storage, processed by a software or hardware application, and the like.
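
A minimal sketch of the line-break parsing and line-conversion steps: collapse stray breaks and extra spaces into one character stream, then re-wrap it into subtitle lines. The 32-character limit matches the EIA-608 caption row width; everything else is illustrative.

```python
# Collapse Teletext line breaks, then re-wrap into EIA-608-sized lines.
MAX_LINE = 32  # EIA-608 caption row width in characters

def teletext_to_608_lines(teletext_page: str) -> list[str]:
    stream = " ".join(teletext_page.split())  # drop line breaks / extra spaces
    lines, current = [], ""
    for word in stream.split(" "):
        candidate = (current + " " + word).strip()
        if len(candidate) <= MAX_LINE:
            current = candidate
        else:
            lines.append(current)  # an end-of-line break would be added here
            current = word
        # note: words longer than MAX_LINE are left unsplit in this sketch
    if current:
        lines.append(current)
    return lines
```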

47 citations


Patent
06 Mar 2001
TL;DR: In this paper, a speech-to-text processing system coupled with a signal separation processor and a signal combination processor is proposed for providing automated captioning for video broadcasts contained in AV signals.
Abstract: System, method and computer-readable medium containing instructions for providing AV signals with open or closed captioning information. The system includes a speech-to-text processing system coupled to a signal separation processor and a signal combination processor for providing automated captioning for video broadcasts contained in AV signals. The method includes separating an audio signal from an AV signal, converting the audio signal to text data, encoding the original AV signal with the converted text data to produce a captioned AV signal and recording and displaying the captioned AV signal. The system may be mobile and portable and may be used in a classroom environment for producing recorded captioned lectures and used for broadcasting live, captioned lectures. Further, the system may automatically translate spoken words in a first language into words in a second language and include the translated words in the captioning information.

47 citations


Patent
31 Aug 2001
TL;DR: In this paper, a method and system for displaying closed captions encoded in a video program represented by an electronic signal in an interactive television system is presented, which can be used to determine if a conflict exists between the screen location of the closed caption and the screen locations of the ITV data.
Abstract: The present invention is directed toward a method and system for displaying closed captions encoded in a video program represented by an electronic signal in an interactive television system. In one aspect the present invention may be used to receive the electronic signal and ITV data and to determine if a conflict exists between the screen location of the closed caption and the screen location of the ITV data. If a conflict does exist, the present invention relocates the closed captions to a screen location reserved for displaying closed captions.
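
A minimal sketch of the conflict test: treat the caption region and the ITV region as rectangles (x, y, width, height) and relocate the caption when they overlap. The reserved location is a placeholder value, not from the patent.

```python
# Relocate the caption when its rectangle overlaps the ITV data rectangle.
RESERVED_CAPTION_AREA = (0, 420, 720, 60)  # hypothetical bottom strip

def rects_overlap(a, b) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_caption(caption_rect, itv_rect):
    if rects_overlap(caption_rect, itv_rect):
        return RESERVED_CAPTION_AREA  # relocate on conflict
    return caption_rect
```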

Patent
Thomas Christopher Dyer1
16 May 2001
TL;DR: In this article, a system and method for displaying related components of a media stream that has been transmitted over a computer network includes at least one storage device that communicates with a television decoder and with the video display.
Abstract: A system and method for displaying related components of a media stream that has been transmitted over a computer network includes at least one storage device that communicates with a television decoder and with the video display. Information from one or more components of the media stream is extracted from the media stream and delivered to one or more storage devices. This stored component is subsequently transmitted to the video display in response to an information release signal that is embedded in the information. The invention can be used to display closed caption and other information with associated audio and video signals using an audio-visual media player.

Journal ArticleDOI
TL;DR: An evaluation of the learning performance shows that a combination of low-level color signal features outperforms several other combinations of signal features in learning character labels in an episode of the TV situation comedy, Seinfeld.
Abstract: This paper presents general purpose video analysis and annotation tools, which combine high-level and low-level information, and which learn through user interaction and feedback. The use of these tools is illustrated through the construction of two video browsers, which allow a user to fast forward (or rewind) to frames, shots, or scenes containing a particular character, characters, or other labeled content. The two browsers developed in this work are: (1) a basic video browser, which exploits relations between high-level scripting information and closed captions, and (2) an advanced video browser, which augments the basic browser with annotations gained from applying machine learning. The learner helps the system adapt to different people's labelings by accepting positive and negative examples of labeled content from a user, and relating these to low-level color and texture features extracted from the digitized video. This learning happens interactively, and is used to infer labels on data the user has not yet seen. The labeled data may then be browsed or retrieved from the database in real time. An evaluation of the learning performance shows that a combination of low-level color signal features outperforms several other combinations of signal features in learning character labels in an episode of the TV situation comedy, Seinfeld. We discuss several issues that arise in the combination of low-level and high-level information, and illustrate solutions to these issues within the context of browsing television sitcoms.
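
A minimal sketch of the interactive-labeling idea: represent each frame by a low-level color histogram and infer labels for unseen frames with a nearest-neighbor rule over the user's labeled examples. The real system combines several feature sets and relates them to script and caption information; this shows only the color-feature path.

```python
# Nearest-neighbor label inference over low-level color histograms.
import numpy as np

def color_histogram(frame_rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """frame_rgb: HxWx3 uint8 array; returns a normalized 3-D color histogram."""
    hist, _ = np.histogramdd(
        frame_rgb.reshape(-1, 3), bins=(bins, bins, bins),
        range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def predict_label(frame_rgb, labeled_examples):
    """labeled_examples: list of (histogram, label) pairs from user feedback."""
    h = color_histogram(frame_rgb)
    dists = [(np.linalg.norm(h - eh), label) for eh, label in labeled_examples]
    return min(dists)[1]  # label of the nearest labeled example
```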

Patent
09 Nov 2001
TL;DR: In this article, a system and method for generating and delivering user-specific content from one or more television broadcasts based on closed captioning contents of the television broadcasts is presented, where a set of parameters are defined by a user or an administrator, such as a text transcript, a still image, an audio clip and/or a video clip from a portion of a television broadcast.
Abstract: A system and method for generating and delivering user-specific content from one or more television broadcasts based on closed captioning contents of the television broadcasts are disclosed herein. One or more sets of television content, representative of one or more television channels, multimedia channels, and the like, are received by a content distributor. The content distributor, in one embodiment, decodes the closed captioning contents of the television content. Using a set of parameters defined by a user or an administrator, the content distributor, in one embodiment, generates user-specific content, such as a text transcript, a still image, an audio clip and/or a video clip from a portion of the television broadcast. In one embodiment, the set of parameters includes one or more keywords. In this case, the content distributor searches the closed captioning content of one or more specified channels for the one or more keywords, and if found, generates user-specific content based on the location of the found keywords within the closed captioning content. In another embodiment, the set of parameters includes one or more specified channels, times and/or date combinations. When a specified date and/or time has occurred, the content distributor, in one embodiment, generates user-specific content from a portion of the television content associated with the specified channel. For example, a user could specify a television channel number, time and date of the user's favorite network television channel. At the specified time and date, the content distributor generates a text transcript (the user-specific content) from the television broadcast on the specified channel. After generating the user-specific content, the content distributor can transmit the user-specific content to the user's receiving device, such as an alphanumeric pager, a wireless phone, a handheld computing device, and the like.
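
A minimal sketch of the keyword branch: scan the decoded caption stream of a channel for user keywords and emit a span around each hit worth turning into a transcript or clip. The (timestamp, text) caption format and the 30-second window are assumptions for illustration.

```python
# Scan decoded captions for user keywords and yield clip spans around hits.
def find_keyword_clips(captions, keywords, window=30):
    """captions: iterable of (timestamp_seconds, text); yields (start, end, text)."""
    keywords = [k.lower() for k in keywords]
    for ts, text in captions:
        lowered = text.lower()
        if any(k in lowered for k in keywords):
            yield (max(0, ts - window), ts + window, text)

captions = [(120, "Markets rallied today"), (305, "A storm is approaching the coast")]
for clip in find_keyword_clips(captions, ["storm"]):
    print(clip)  # (275, 335, 'A storm is approaching the coast')
```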

Patent
Paul Thomsen1
14 Jun 2001
TL;DR: In this article, a system and method for selecting symbols on a television display was proposed, in which the television maintains closed caption information on the television display and the viewer can then select and retrieve additional information regarding the selected symbols.
Abstract: A system and method for selecting symbols on a television display. In response to a viewer request, the television maintains closed caption information on the television display. The viewer may then select and retrieve additional information regarding the selected symbols. For example, in response to selecting a word of the closed caption information, in one embodiment of the invention, the television queries an external device such as an Internet search engine for additional information. The external device can comprise other types of database systems, such as governmental, private, educational, and commercial databases. Furthermore, in one embodiment of the invention, the television queries an internal database of the television to find the requested information.

Patent
02 Jul 2001
TL;DR: In this paper, a system and method for automatically linking closed captioning to Web sites, based on a viewer selection of a word or phrase displayed in a closed-captioning window of a TV, is presented.
Abstract: A system and method for automatically linking closed captioning to Web sites, based on a viewer selection of a word or phrase displayed in a closed captioning window of a TV. A microprocessor associated with the TV can automatically access the Web using as an entering argument the selected word or phrase from the closed captioning, so that a viewer can obtain further information regarding televised content.
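
A minimal sketch of the lookup: use the viewer-selected caption word or phrase as the query argument of a Web search URL. The search endpoint and query parameter name are placeholders, not from the patent.

```python
# Build a Web search URL from a viewer-selected caption word or phrase.
from urllib.parse import urlencode

SEARCH_ENDPOINT = "https://www.example-search.com/q"  # hypothetical engine

def caption_word_to_url(selected_text: str) -> str:
    return SEARCH_ENDPOINT + "?" + urlencode({"query": selected_text})

print(caption_word_to_url("closed captioning"))
# https://www.example-search.com/q?query=closed+captioning
```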

Patent
12 Jul 2001
TL;DR: A system is proposed for delivering closed caption text to attendees of movie theaters and other events via a head mounted display, serving hearing-impaired persons and persons who do not speak the language of a movie's dialogue, and for delivering broadcast text to the same audience at other events.
Abstract: PROBLEM TO BE SOLVED: To provide a system for delivering closed caption text to attendees of movie theaters or other events, and to provide a head mounted display for hearing-impaired persons and for persons who do not speak the language used in a movie's dialogue, as well as for delivering broadcast text to such people at other events. SOLUTION: The main components are a monocular or binocular display suitable for head mounting by movie audiences or event participants, a central router/processor, and a transmission protocol. Closed caption text or representative text is projected on the display screen in synchronization with the movie dialogue or broadcast, so that the wearer can follow the action on screen while the captions remain completely invisible to people not using the display device. An interface for selecting the language of the closed caption text is provided for speakers of foreign languages.

Patent
29 Oct 2001
TL;DR: In this article, a system and method for automatically establishing TV audio/video/closed captioning, based on time/date/geographic location/location of the TV within the home.
Abstract: A system and method for automatically establishing TV audio/video/closed captioning, based on time/date/geographic location/location of the TV within the home.

Journal ArticleDOI
S. Dutta1
TL;DR: The NX-2700 is a programmable processor with a very powerful, general-purpose very long instruction word (VLIW) central processing unit (CPU) core that implements many nontrivial multimedia algorithms, coordinates all on-chip activities, and runs a small real-time operating system.
Abstract: This paper describes the architecture, functionality, and design of NX-2700, a digital television and media processor chip from Philips Semiconductors. The NX-2700 is the second generation of an architectural family of programmable multimedia processors targeted at the digital television (DTV) markets, including the United States Advanced Television Systems Committee (ATSC) DTV-standard-based applications. The chip not only supports all of the 18 ATSC formats from standard-definition to wide-screen, high-definition video, but also has the power to handle high-definition television (HDTV) video and audio source decoding (high-level MPEG-2 video, AC-3 and ProLogic audio, closed captioning, etc.) as well as the flexibility to process advanced interactive services. NX-2700 is a programmable processor with a very powerful, general-purpose very long instruction word (VLIW) central processing unit (CPU) core that implements many nontrivial multimedia algorithms, coordinates all on-chip activities, and runs a small real-time operating system. The CPU core, aided by an array of peripheral devices (multimedia coprocessors and input-output units) and high-performance buses, facilitates concurrent processing of audio, video, graphics, and communication data.

Proceedings Article
01 Jan 2001
TL;DR: A video OCR system that automatically extracts closed captions from video frames as keywords (or "cues") for building annotations of sport videos for the video annotation and retrieval task.
Abstract: This paper presents a video OCR system that automatically extracts closed captions from video frames as keywords (or, as we call them, "cues") for building annotations of sport videos. In this system, text regions that contain closed captions are first identified using support vector machines (SVMs). We then enhance the identified text regions using two groups of asymmetric filters and recognize them using a commercial OCR software package. The resulting captions are recorded as cues in XML format for the video annotation and retrieval task.
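
A minimal sketch of the text-region classification step using an SVM, as the paper does, but with invented two-dimensional features (edge density, local contrast) standing in for the paper's actual feature set.

```python
# SVM classification of candidate text regions (features are illustrative).
from sklearn import svm

# Each row: [edge_density, local_contrast] for a candidate region (made up).
train_features = [[0.80, 0.9], [0.75, 0.8], [0.10, 0.2], [0.20, 0.3]]
train_labels = [1, 1, 0, 0]  # 1 = contains caption text, 0 = does not

clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(train_features, train_labels)

candidate = [[0.70, 0.85]]
print(clf.predict(candidate))  # [1] -> pass this region to the OCR stage
```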

Journal Article
TL;DR: In this paper, the authors report on a system that automates three processes in the creation of closed-captioned television programs: summarization, synchronization, and screen creation, yielding, from an electronic manuscript, closed-caption data applicable to current closed-captioned broadcasts.
Abstract: Increasing the number of closed-captioned television programs represents a social responsibility in the sense of providing information. In terms of the system to create closed-captioned television programs by hand, there is considerable hope that the time involved can be reduced and the burden on workers can be eased. The system the authors report on automates three processes in the creation of closed-captioned television programs: summarization, synchronization, and closed-captioned screen creation, yielding from an electronic manuscript closed-caption data applicable to current closed-captioned broadcasts. The authors created closed captions for 12 types of news programs and one documentary program, confirming that the process of creating a closed-captioned television program could be completed in three to six times the program length, excluding the process of creating the electronic manuscript and testing/editing. The authors demonstrate the validity of their system insofar as the time needed to create closed captions using their system was about 70% of the time needed to create closed captions by hand, excluding the process of testing and editing. © 2003 Wiley Periodicals, Inc. Syst Comp Jpn, 34(13): 71–82, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10047

Proceedings Article
26 Aug 2001
TL;DR: The Perseus project, which is devoted to developing techniques and tools for creating personalized multimedia news portals, is described, which combines event mining and tracking on the Internet, commercial detection and recognition in video and audio streams, and selection of relevant news video fragments, based on closed captioning and audio transcripts.
Abstract: This paper describes the Perseus project, which is devoted to developing techniques and tools for creating personalized multimedia news portals. The purpose of a personalized multimedia news portal is to provide relevant information, selected from newswire sites on the Internet and augmented by video clips automatically extracted from TV broadcasts, based on the user's preferences. To create such an intelligent information system several techniques related to textual information retrieval, audio and video segmentation, and topic detection should be developed to work in accord. The approaches to event mining and tracking on the Internet, commercial detection and recognition in video and audio streams, and selection of relevant news video fragments, based on closed captioning and audio transcripts, are described.

Proceedings ArticleDOI
08 Feb 2001
TL;DR: This paper describes how services are inserted into and transported within the bitstream, and the challenges that must be overcome in order to provide correctly formatted and synchronized captioning.
Abstract: The digital television (DTV) transport stream is designed to accommodate NTSC and DTV caption services. This paper describes how services are inserted into and transported within the bitstream, and the challenges that must be overcome in order to provide correctly formatted and synchronized captioning. Caption service transmission starts at the caption-encoding head-end feeding the DTV encoder (MPEG-II video compression), and ends at the decoding hardware in the DTV receiver. Obstacles to be overcome include ensuring system integration, minimizing codec latency and maintaining synchronization. Awareness of these concerns is imperative for engineers and management in the digital video industry.


Patent
08 Aug 2001
TL;DR: In this article, a computer system is used to automatically search closed-captioned television for information requested by a user (i.e., a keyword) in real time.
Abstract: A computer system is used to automatically search closed-captioned television for information requested by a user (i.e., a keyword). The system may be front-ended by an Internet site, where closed-captioned television programming is searched in real time for the requested information. Upon finding the requested information, the user is notified by email, voice mail, etc., of the programme name, broadcast time, and broadcast channel. The user may also access a video segment of, or additional textual information from, the identified programme.

Proceedings Article
01 Jan 2001
TL;DR: A collaboration between Bell Labs and NHK (Japan Broadcasting Corp.) STRL to develop a real-time large vocabulary speech recognition system for live closed-captioning of NHK news programs that delivers a word error rate (WER) of less than 2% under studio news conditions and a WER of about 5% on noisy news and reporter speech when evaluated on a real broadcast news program.
Abstract: This paper describes a collaboration between Bell Labs and NHK (Japan Broadcasting Corp.) STRL to develop a real-time large vocabulary speech recognition system for live closed-captioning of NHK news programs. Bell Labs' broadcast news recognition engine consists of a two-pass decoder using bigram language models (LM) and right-biphone models during the first pass, and a trigram LM with within-word triphone models in the second pass. Various pruning strategies are used to achieve real-time decoding, together with a noise compensation procedure aimed at improving recognition on noisy segments of the program. The system operates in a real-time mode and delivers a word error rate (WER) of less than 2% under studio news conditions and a WER of about 5% on noisy news and reporter speech when evaluated on a real broadcast news program.
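
A toy illustration of the bigram language-model scoring used in a first decoding pass: log P(w1..wn) is approximated as the sum of log P(wi | wi-1). The probabilities below are invented; the real system estimates them from broadcast-news text and combines them with acoustic scores.

```python
# Toy bigram language-model log-probability of a word sequence.
import math

BIGRAM_P = {  # P(word | previous word), hypothetical values
    ("<s>", "the"): 0.2, ("the", "news"): 0.1, ("news", "tonight"): 0.05,
}
FLOOR = 1e-6  # stand-in for smoothing / back-off of unseen bigrams

def bigram_logprob(words):
    score, prev = 0.0, "<s>"
    for w in words:
        score += math.log(BIGRAM_P.get((prev, w), FLOOR))
        prev = w
    return score

print(bigram_logprob(["the", "news", "tonight"]))
```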

Patent
06 Jul 2001
TL;DR: In this paper, a system and method for inserting interactive content into a TV signal by relying on a portion of the TV signal (e.g., closed captioning data) to dynamically and automatically incorporate pertinent interactive content in the TV signals is described.
Abstract: A system and method are disclosed for inserting interactive content into a TV signal by relying on a portion of the TV signal (e.g., closed captioning data) to dynamically and automatically incorporate pertinent interactive content into the TV signal. In one embodiment, a closed captioning decoder (34) decodes the closed captioning information being transmitted along with the TV signal. A processor (36) compares the decoded data with an index of terms and/or phrases. If a match is found, the processor retrieves corresponding ITV data (either the interactive content information itself or suitable ITV data) and transmits the data to an ITV data encoder (37), which encodes the content into the TV signal prior to broadcast.

Patent
12 Sep 2001
TL;DR: In this article, an abbreviated blanking period was proposed to increase the amount of time available for sending data in each scan line, enabling the system to send more data over each channel, including low bandwidth, non-timing information over one or more channels of the digital video link.
Abstract: One embodiment of the present invention uses an abbreviated blanking period, in comparison to the standard VESA and CEA-EIA blanking periods, in order to send data, including low bandwidth, non-timing information, over one or more channels of the digital video link. By shortening the blanking period, the amount of time available for sending data in each scan line is increased, enabling the system to send more data over each channel. The inactive video portion of a scan line sent during vertical sync may also be used to send additional digital data. Shortening the blanking periods and/or using the inactive video sections of the horizontal scan lines adds to the overall data capacity of the link and may be used to send other digital data, such as multi-channel audio, video, control, timing, closed captioning or other digital data.
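
Back-of-the-envelope arithmetic for the capacity gain: shortening the horizontal blanking interval frees pixel clocks on every scan line for auxiliary data such as audio or closed captioning. All numbers below are illustrative assumptions, not figures from the patent.

```python
# Illustrative capacity gained by abbreviating the blanking period.
STANDARD_BLANK_PX = 370      # assumed standard blanking width, in pixel clocks
ABBREVIATED_BLANK_PX = 100   # assumed abbreviated blanking width
LINES_PER_SECOND = 45_000    # assumed line rate of the video link
BYTES_PER_PX = 1             # assumed payload per reclaimed pixel clock

extra_px = STANDARD_BLANK_PX - ABBREVIATED_BLANK_PX
capacity = extra_px * LINES_PER_SECOND * BYTES_PER_PX
print(f"{capacity / 1e6:.2f} MB/s of extra data capacity")  # 12.15 MB/s here
```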

Book ChapterDOI
24 Oct 2001
TL;DR: Compared to other systems that depend mainly on visual features, the proposed scheme retrieves semantically relevant articles more effectively and achieves time alignment between CC texts and video data.
Abstract: In this paper, we propose a new method for searching and browsing news videos based on a multi-modal approach. In the proposed scheme, we use closed caption (CC) data to index the contents of TV news articles effectively. To achieve time alignment between the CC texts and video data, which is necessary for multi-modal search and visualization, a supervised speech recognition technique is employed. In our implementation, we provide two different mechanisms for news video browsing. One is a textual query based search engine, and the other is a topic based browser which acts as an assistant tool for finding the desired news articles. Compared to other systems that depend mainly on visual features, the proposed scheme retrieves semantically relevant articles more effectively.

Patent
04 Oct 2001
TL;DR: In this article, a user profile is applied to the auxiliary signal to determine if the television program content is of interest to the PC user, and the content is alerted and the material stored.
Abstract: A method for operating a television system (10) includes the steps of generating a television program (21) and an auxiliary signal descriptive of the content components (67) or clips of the television program (21). The television program content and auxiliary signal (67) are received via a receiver board in a PC (57) and are temporarily stored (56). A user profile (63) is applied to the auxiliary signal to determine if the television program content is of interest to the PC user (50). If the content is of interest, the PC user is alerted and the material is stored. The auxiliary signal (67) may be embedded in the broadcast stream (30) based on information created at the station as the TV program is created. Each video clip (21) is described based on the station's information, such as scripts or closed caption text, or it may be created by a computer (30) at the TV station (10) using interpretative software, or manually at the source. A television system includes a television station (12) having a transmitter (28), a television program source (20) linked to the transmitter, and a descriptive data source (30) linked to the transmitter (28).