Showing papers on "Closed captioning" published in 1998


Patent
27 Oct 1998
TL;DR: In this article, a decoder extracts the URLs from the television signal, and supplies the URLs to a retrieval device which automatically retrieves corresponding web pages or other similar information over a network.
Abstract: Uniform Resource Locators (URLs) or other network information identifiers are transmitted with television signals in order to permit web content to be displayed in synchronization with television programming. In an illustrative embodiment, URLs are embedded in a closed caption portion of a transmitted television signal, and delimited from the closed caption text using predetermined delimiting characters. A decoder extracts the URLs from the television signal, and supplies the URLs to a retrieval device which automatically retrieves corresponding web pages or other similar information over a network. The retrieved web pages are then displayed to a viewer in synchronization with related programming in the television signal. The retrieval device may be a set-top box associated with a television set that displays both a retrieved web page and the corresponding television picture portion of the television signal. Alternatively, the retrieval device may be a computer which retrieves and displays a web page, while the corresponding television picture is displayed on a television set.
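The patent specifies only that URLs are set off from the caption text by predetermined delimiting characters, without naming them. As a rough sketch of the decoder side, assuming hypothetical `<`/`>` delimiters:

```python
import re

# Hypothetical delimiters -- the patent says only "predetermined
# delimiting characters" and does not define them.
URL_PATTERN = re.compile(r"<(https?://[^>]+)>")

def split_caption_stream(decoded_text: str) -> tuple[str, list[str]]:
    """Separate ordinary caption text from embedded URLs.

    Returns the caption text with the URL markup removed, plus the
    list of URLs to hand to the retrieval device (set-top box or PC).
    """
    urls = URL_PATTERN.findall(decoded_text)
    plain = re.sub(r"\s+", " ", URL_PATTERN.sub("", decoded_text)).strip()
    return plain, urls

text, urls = split_caption_stream(
    "More on tonight's story <http://www.example.com/show> after the break."
)
print(text)  # More on tonight's story after the break.
print(urls)  # ['http://www.example.com/show']
```

The retrieval device would then fetch each URL and display the page in synchronization with the television picture.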

264 citations


Patent
05 Jan 1998
TL;DR: In this paper, a parser parses the closed captioning script to identify a set of unique phrases, with each phrase having the same number of words, and then the parser creates a key phrase data file which contains a listing of the key phrases and their association to the supplemental data.
Abstract: A system and method uses the closed captioning script to synchronize supplemental data with specified junctures in a video program. A parser parses the closed captioning script to identify a set of unique phrases, with each phrase having the same number of words. A program producer decides what points in the video program to introduce enhancing content. The producer associates supplemental data used to activate the enhancing content with specific key phrases of the closed captioning script that correspond to the desired points in the program. The parser creates a key phrase data file which contains a listing of the key phrases and their association to the supplemental data. The key phrase data file is delivered to viewer computing units at user's homes. When the program is played, the viewer computing unit monitors the closed captioning script to detect the key phrases listed in the key phrase data file.
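As a sketch of the parsing and monitoring steps, one can slide a fixed-length window over the script, keep only the word n-grams that occur exactly once, and then watch the live caption feed for those phrases. Everything below (the phrase length, the sample trigger) is illustrative rather than taken from the patent:

```python
from collections import Counter

PHRASE_LEN = 4  # each key phrase has the same number of words

def unique_phrases(script: str) -> set:
    """Return the word n-grams that occur exactly once in the script."""
    words = script.lower().split()
    grams = [tuple(words[i:i + PHRASE_LEN])
             for i in range(len(words) - PHRASE_LEN + 1)]
    return {g for g, n in Counter(grams).items() if n == 1}

# The producer's "key phrase data file": chosen key phrases mapped to
# the supplemental data that activates the enhancing content.
key_phrase_file = {
    ("storm", "reaches", "the", "coast"): "show-weather-map",
}

def monitor(caption_words):
    """Viewer-side loop: fire supplemental content as phrases scroll past."""
    recent = []
    for w in caption_words:
        recent = (recent + [w.lower()])[-PHRASE_LEN:]
        action = key_phrase_file.get(tuple(recent))
        if action:
            print("trigger:", action)
```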

229 citations


Proceedings ArticleDOI
22 Apr 1998
TL;DR: This paper explains how the Informedia system takes advantage of the closed captioning frequently broadcast with the news, how it extracts timing information by aligning the closed-captions with the result of the speech recognition, and how the system integrates closed-caption cues with the results of image and audio processing.
Abstract: The Informedia Digital Library Project allows full content indexing and retrieval of text, audio and video material. Segmentation is an integral process in the Informedia digital video library. The success of the Informedia project hinges on two critical assumptions: that we can extract sufficiently accurate speech recognition transcripts from the broadcast audio and that we can segment the broadcast into video paragraphs, or stories, that are useful for information retrieval. In previous papers we have shown that speech recognition is sufficient for information retrieval of pre-segmented video news stories. We now address the issue of segmentation and demonstrate that a fully automatic system can extract story boundaries using available audio, video and closed-captioning cues. The story segmentation step for the Informedia Digital Video Library splits full-length news broadcasts into individual news stories. During this phase the system also labels commercials as separate "stories". We explain how the Informedia system takes advantage of the closed captioning frequently broadcast with the news, how it extracts timing information by aligning the closed-captions with the result of the speech recognition, and how the system integrates closed-caption cues with the results of image and audio processing.
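The caption-to-recognizer alignment the paper mentions can be approximated with an off-the-shelf sequence matcher: caption words carry no reliable times, recognizer words do, and matched words carry their timestamps across. A sketch (the data layout is assumed, not Informedia's actual interfaces):

```python
import difflib

def time_align(caption_words, asr_words):
    """Copy ASR timestamps onto matching closed-caption words.

    caption_words: list of words from the caption stream.
    asr_words:     list of (word, start_seconds) from speech recognition.
    Returns a list of (caption_word, start_seconds or None).
    """
    hyp = [w for w, _ in asr_words]
    matcher = difflib.SequenceMatcher(a=caption_words, b=hyp, autojunk=False)
    timed = [(w, None) for w in caption_words]
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            timed[block.a + k] = (caption_words[block.a + k],
                                  asr_words[block.b + k][1])
    return timed
```

Caption words the recognizer missed keep `None` and can be interpolated from their timed neighbours.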

224 citations


Patent
17 Jun 1998
TL;DR: In this article, a method and apparatus for communicating logical addresses within a broadcast television signal are provided, where a sequence of data complying with a predetermined syntax and including the logical address is embedded in either a text service channel (e.g., T1, T2, T3, T4) or a captioning service channel (e.g., CC1, CC2, CC3, CC4) of the vertical blanking interval (VBI) of the video signal.
Abstract: A method and apparatus for communicating logical addresses within a broadcast television signal are provided. According to one aspect of the present invention, a logical address of a resource, e.g., a Uniform Resource Locator (URL), may be communicated to a receiving device, such as a set-top box, by way of a data service channel of a video signal. A sequence of data complying with a predetermined syntax and including the logical address is embedded in either a text service channel (e.g., T1, T2, T3, T4) or a captioning service channel (e.g., CC1, CC2, CC3, CC4) of the vertical blanking interval (VBI) of the video signal. According to another aspect of the present invention, a logical address of a resource may be received by way of a data service channel of a video signal. A video signal including data associated with one or more data services is received. Subsequently, a sequence of data complying with a predetermined syntax is retrieved from either a captioning service or a text service. Ultimately, a logical address may be extracted from the video signal by parsing the sequence of data. Importantly, features of the present invention are applicable to many broadcast television (TV) systems, including National Television Standards Committee (NTSC), Phase Alternating Line (PAL), and Sequential Couleur Avec Memoire (SECAM), as well as the proposed High Definition Television (HDTV) standard. Further, the present invention is transport-independent, thereby allowing a variety of transport mechanisms, such as analog cable, digital satellite, digital TV, cable TV and others, to be employed.

193 citations


Proceedings ArticleDOI
D.C. Gibbon1
23 Feb 1998
TL;DR: The techniques presented can produce high quality hypermedia documents of video programs with little or no additional manual effort.
Abstract: This paper presents a method of automatically creating hypermedia documents from conventional transcriptions of television programs. Using parallel text alignment techniques, the temporal information derived from the closed caption signal is exploited to convert the transcription into a synchronized text stream. Given this text stream, we can create links between the transcription and the image and audio media streams. We describe a two-pass method for aligning parallel texts that first uses dynamic programming techniques to maximize the number of corresponding words (by minimizing the word edit distance). The second stage converts the word alignment into a sentence alignment, taking into account the cases of sentence split and merge. We present results of text alignment on a database of 610 programs (including three television news programs over a one-year period) for which we have closed caption, transcript, audio and image streams. The techniques presented can produce high quality hypermedia documents of video programs with little or no additional manual effort.
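The first pass is a standard edit-distance dynamic program run over words instead of characters. A compact, self-contained version (unit costs assumed; not necessarily the author's exact formulation):

```python
def word_alignment(a, b):
    """Align word lists a and b by minimizing word edit distance.

    Returns (distance, pairs), where pairs lists (i, j) indices of
    corresponding (equal) words -- the quantity the paper maximizes.
    """
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    pairs, i, j = [], n, m
    while i > 0 and j > 0:  # backtrace to recover correspondences
        cost = 0 if a[i - 1] == b[j - 1] else 1
        if d[i][j] == d[i - 1][j - 1] + cost:
            if cost == 0:
                pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return d[n][m], pairs[::-1]
```

The second pass would then group these word pairs into sentence correspondences, allowing one-to-many mappings where sentences split or merge.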

120 citations


Journal ArticleDOI
TL;DR: Video segments captioned at different speeds were shown to a group of 578 people that included deaf, hard of hearing, and hearing viewers, and participants used a five-point scale to assess each segment's caption speed.
Abstract: Video segments captioned at different speeds were shown to a group of 578 people that included deaf, hard of hearing, and hearing viewers. Participants used a five-point scale to assess each segment's caption speed. The "OK" speed, defined as the rate at which "caption speed is comfortable to me," was found to be about 145 words per minute (WPM), very close to the 141 WPM mean rate actually found in television programs (Jensema, McCann, & Ramsey, 1996). Participants adapted well to increasing caption speeds. Most apparently had little trouble with the captions until the rate was at least 170 WPM. Hearing people wanted slightly slower captions; however, this apparently related to how often they watched captioned television. Frequent viewers were comfortable with faster captions. Age and sex were not related to caption speed preference; nor was education, with the exception that people who had attended graduate school showed evidence that they might prefer slightly faster captions.
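The rates in question are ordinary words-per-minute figures; given time-coded captions the computation is direct. A minimal sketch with an assumed (text, start, end) record layout:

```python
def caption_wpm(captions):
    """captions: list of (text, start_seconds, end_seconds), in order."""
    words = sum(len(text.split()) for text, _, _ in captions)
    minutes = (captions[-1][2] - captions[0][1]) / 60.0
    return words / minutes

# 290 caption words displayed over two minutes -> 145 WPM,
# the study's "OK" speed.
print(caption_wpm([("word " * 290, 0.0, 120.0)]))  # 145.0
```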

78 citations


Patent
22 Sep 1998
TL;DR: In this paper, a text data extraction system analyzes one or more interleaved video data streams and parses the stream(s) to extract text data from text data packets.
Abstract: A text data extraction system analyzes one or more interleaved video data streams and parses the stream(s) to extract text data from text data packets. In addition, presentation time data is extracted to facilitate independent use of the text data from corresponding video data. Extracted time coded text data is stored so that the presentation time data can be used to link the extracted text data back to the corresponding video data to facilitate for example: annotation of a movie, text searching of closed caption text, printing transcripts of closed caption text, controlling video playback, such as the order in which scenes are played back, and any other suitable navigation or manipulation of video images or text data.
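A minimal version of such a store keeps (presentation time, text) records, so that a text hit can seek the corresponding video. The record layout and names are assumptions for illustration:

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class CaptionRecord:
    pts: float  # presentation time, seconds into the program
    text: str

class CaptionIndex:
    """Time-coded caption text, usable independently of the video data."""

    def __init__(self, records):
        self.records = sorted(records, key=lambda r: r.pts)
        self._times = [r.pts for r in self.records]

    def search(self, query):
        """Text search; each hit's pts links back to the video."""
        q = query.lower()
        return [r for r in self.records if q in r.text.lower()]

    def at(self, t):
        """The caption on screen at time t (transcripts, annotation)."""
        i = max(bisect_right(self._times, t) - 1, 0)
        return self.records[i]
```

A player can jump straight to `hit.pts` for every `hit` returned by `search`, which is the kind of navigation the patent describes.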

51 citations


Patent
29 Apr 1998
TL;DR: In this article, a client computer accesses a list of key text data having entries, each of which includes key text that is included in the closed captioning data of a particular program and that is distinctive to the program.
Abstract: Presenting to a viewer additional information corresponding to a television program by recognizing key text data included in closed captioning is disclosed. A client computer that is capable of displaying television programming to a viewer and retrieving information from the Internet or from another network receives broadcast data including a program and closed captioning data. The client computer accesses a list of key text data having entries, each of which includes key text that is included in the closed captioning data of a particular program and that is distinctive to the program. The entries in the list of key text data further include instructions enabling the client computer to retrieve the additional information corresponding to the programs. The client computer decodes the closed captioning data and compares it to the key text data entries. When a match is identified, the client computer system executes the instructions included in the entry that has been matched. The instructions typically result in a viewer-selectable link being displayed on the display device. When the viewer selects the link, the client computer retrieves the additional information from a remote server computer and displays the additional information to the viewer.

34 citations


Proceedings ArticleDOI
01 Mar 1998
TL;DR: A practical digital video database system based on language and image analysis is presented, integrating components from digital video processing, still-image search, information retrieval, and closed captioning processing.
Abstract: We integrated a practical digital video database system based on language and image analysis, with components from digital video processing, still-image search, information retrieval, and closed captioning processing. The aim is to utilize the multiple modalities of information in video and implement data fusion among them: image information, speech/dialog information, closed captioning information, sound track information (such as music, gunfire, and explosions), caption information, motion information, and temporal information. Effort is made to allow access to video content at different levels, including the video program level, scene level, shot level, and object level. Browsing, subject-based classification, and random retrieval are available as approaches to the content.

30 citations


Patent
02 Apr 1998
TL;DR: In this article, a video communications device is presented that includes a camera and a teletype device (TTY) for transmitting and receiving teletype information in a video-conferencing arrangement.
Abstract: A video communications device used as part of a communication terminal in a video-conferencing arrangement provides the capability of real-time captioning along with real-time visual communication for individuals who are hearing- or language-impaired and others whose speech is difficult to understand or absent. The device enhances the ability of people with communication disabilities to communicate quickly and effectively with those who are similarly afflicted as well as with those who are not. In one example embodiment, the video communications device includes a camera and a teletype device (TTY) for transmitting and receiving teletype information. The camera captures local images and generates a set of video signals representing those images. The teletype device captures input data from a user and generates a set of data signals. The device can be configured for compatibility with conventional equipment and for alerting users of incoming calls nonaudibly.

28 citations


Patent
David J. Matz1, James P. Ketrenos1
10 Sep 1998
TL;DR: In this article, the audio portion of the broadcast television programming, in closed caption script format, is parsed and compared to a key word database which may be preprogrammed by the user.
Abstract: An interactive broadcast may include television programming and associated services such as closed caption scripts. The audio portion of the broadcast television programming, in closed caption script format, is parsed and compared to a key word database which may be preprogrammed by the user. The closed caption text may be displayed in real time. When a match is identified, an event is fired. The particular event which occurs and the key word may be programmed by the user.
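Such a keyword watch reduces to a small lookup over the decoded caption words. The keyword set and events below are invented for illustration:

```python
# User-preprogrammed key word database: word -> event to fire on a match.
keyword_db = {
    "weather": lambda: print("event: pop up local forecast"),
    "goal":    lambda: print("event: start recording highlight"),
}

def scan_caption_line(line, db=keyword_db):
    """Parse one decoded closed-caption line; fire events on matches."""
    for word in line.lower().split():
        event = db.get(word.strip(".,!?\"'"))
        if event:
            event()

scan_caption_line("And now the weather for tonight.")
# -> event: pop up local forecast
```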


01 Jan 1998
TL;DR: This thesis demonstrates that computed alignment of media objects is practical and can provide immediate solutions to many information retrieval and content presentation problems, and introduces a new area for research in media data analysis.
Abstract: This thesis introduces multiple media correlation, a new technology for the automatic alignment of multiple media objects such as text, audio, and video. This research began with the question: what can be learned when multiple multimedia components are analyzed simultaneously? Most ongoing research in computational multimedia has focused on queries, indexing, and retrieval within a single media type. Video is compressed and searched independently of audio; text is indexed without regard to temporal relationships it may have to other media data. Multiple media correlation provides a framework for locating and exploiting correlations between multiple, potentially heterogeneous, media streams. The goal is computed synchronization, the determination of temporal and spatial alignments that optimize a correlation function and indicate commonality and synchronization between media objects. The model also provides a basis for comparison of media in unrelated domains. There are many real-world applications for this technology, including speaker localization, musical score alignment, and degraded media realignment. Two applications, text-to-speech alignment and parallel text alignment, are described in detail with experimental validation. Text-to-speech alignment computes the alignment between a textual transcript and speech-based audio. The presented solutions are effective for a wide variety of content and are useful not only for retrieval of content, but in support of automatic captioning of movies and video. Parallel text alignment provides a tool for the comparison of alternative translations of the same document that is particularly useful to the classics scholar interested in comparing translation techniques or styles. The results presented in this thesis include (a) new media models more useful in analysis applications, (b) a theoretical model for multiple media correlation, (c) two practical application solutions that have widespread applicability, and (d) Xtrieve, a multimedia database retrieval system that demonstrates this new technology and demonstrates application of multiple media correlation to information retrieval. This thesis demonstrates that computed alignment of media objects is practical and can provide immediate solutions to many information retrieval and content presentation problems. It also introduces a new area for research in media data analysis.
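In generic form (notation mine, not necessarily the thesis's), computed synchronization seeks the alignment that optimizes a correlation function between media objects:

```latex
\tau^{*} \;=\; \operatorname*{arg\,max}_{\tau \in T} \; C\bigl(M_{1},\, M_{2} \circ \tau\bigr)
```

where M1 and M2 are the media streams (e.g., transcript text and speech audio), T is the set of admissible temporal/spatial warpings, and C is the correlation measure; the optimizing warping is the computed synchronization.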

Patent
John Alson Hicks1
26 Aug 1998
TL;DR: Signals corresponding to Uniform Resource Locators (URLs) are inserted into a broadcast television signal, advantageously in a portion of the vertical blanking interval available for closed captioning information.
Abstract: Signals corresponding to Uniform Resource Locators (URLs) are inserted directly into a broadcast television signal, advantageously in a portion of the vertical blanking interval available for closed captioning information. When a television signal with the embedded URL is received in an Internet-capable television, the presence of the URL is indicated to the viewer, such as by causing a small, transparent overlay icon to be displayed on the TV screen. When a viewer thereafter actuates a special button on their wireless (infrared) or wired remote control, the URL information contained in the television signal is extracted and sent to the portion of the television, or auxiliary device, used to establish an Internet connection. The viewer is then able to gain access to the Internet site having the address specified by the URL, which site will generally contain information associated with the television programming.

Patent
10 Jul 1998
TL;DR: In this article, a caption type language learning system utilizing a communication network is disclosed, which includes a captioning language training network server 11 for data-basing the data for respective learning fields.
Abstract: A caption-type language learning system utilizing a communication network is disclosed. The system includes a captioning language learning network server 11 that maintains a database of data for the respective learning fields. A communication switching station 12 receives the captioning language learning data through the network. A satellite switching station 13 transmits the data suited to the communication characteristics through a communication network. The system further includes wire and wireless communication terminals 18, 19, 28, 22, 23, 24, 25, 26 and 27 for receiving captioning language learning data from an external communication network. A wireless communication terminal having a captioning language learning function, or a captioning language learning terminal 21, receives captioning language learning data through the wire switching station 17 or directly from an external communication network. The terminal includes a modem section 31 for receiving the captioning language learning data from the captioning language learning network server 11 through the wire switching station. A communication interface section 32 receives the data from the wire or wireless terminal or a PC in a form readable by the internal devices. An internal captioning language learning data memory section 33 stores the audio and caption data, and a CODEC section 34 converts the audio data to analogue audio data. The terminal further includes an amplifying section 35, an LCD driver 7 for driving an LCD display 38 to display the caption data, and a DSP/CPU section 39 for processing the audio and captioning learning data and for controlling the whole terminal. The caption and audio data can thus be watched and listened to without the conventional caption-type cassette tape and player. Further, the terminal is convenient to carry and resistant to malfunction, so that foreign languages can be learned in a convenient way.

Patent
02 Oct 1998
TL;DR: In this article, a method for parsing closed captioning data (or other similar embedded data, such as extended service data) encoded according to one of at least three syntaxes is provided.
Abstract: A method is provided for parsing closed captioning data (or other similar embedded data, such as extended service data) encoded according to one of at least three syntaxes. An encoded video signal is received which is hierarchically organized into picture sections of a picture layer. Each picture section contains an encoded picture and at least one corresponding user data section. A beginning of a sequence of bits of the user data section is identified within one of the picture sections. If the identified sequence of bits contains either undefined data, closed captioning data encoded according to a specific first syntax, or closed captioning data encoded according to a specific second syntax, but not closed captioning data encoded according to any other syntax, then the following steps are performed. A byte representing a user data length is extracted from the beginning of the sequence. A byte representing a user data type is extracted following the user data length byte. A determination is made whether or not the user data length byte equals a first predefined constant or a second predefined constant, indicating the presence of two closed captioning bytes according to the first and second syntaxes, respectively. If so, a determination is made as to whether or not the user data type byte equals a third predefined constant. If both determinations are made in the affirmative, then two closed captioning bytes for the corresponding picture are extracted immediately following the user data type byte in the sequence. An apparatus for performing the method is also disclosed.
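The byte-level logic reads naturally as a short parser. The constant values below are placeholders, since the patent identifies them only as the first, second, and third predefined constants:

```python
# Placeholder values -- the patent does not disclose the actual constants.
LENGTH_CONST_A = 0x02  # "first predefined constant"  (first syntax)
LENGTH_CONST_B = 0x03  # "second predefined constant" (second syntax)
TYPE_CONST     = 0x09  # "third predefined constant"

def extract_cc_bytes(user_data: bytes):
    """Return the two closed-captioning bytes for a picture, or None."""
    if len(user_data) < 4:
        return None
    user_data_length = user_data[0]  # byte at the start of the sequence
    user_data_type = user_data[1]    # byte following the length byte
    if (user_data_length in (LENGTH_CONST_A, LENGTH_CONST_B)
            and user_data_type == TYPE_CONST):
        return user_data[2:4]  # two CC bytes immediately follow
    return None
```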

Proceedings Article
01 Jan 1998
TL;DR: This study analyses the possible use of automatic speech recognition (ASR) for the automatic captioning of TV programs and shows how ASR alone can even lower the efficiency of captioning.
Abstract: This study analyses the possible use of automatic speech recognition (ASR) for the automatic captioning of TV programs. Captioning requires: (1) transcribing the spoken words and (2) determining the times at which the caption has to appear and disappear on the screen. These times have to match as closely as possible the corresponding times on the audio signal. Automatic speech recognition can be used to determine both aspects: the spoken words and their times. This paper focuses on the question: would perfect automatic speech recognition systems be able to automate the captioning process? We present quantitative data on the discrepancy between the audio signal and the manually generated captions. We show how ASR alone can even lower the efficiency of captioning. The techniques needed to automate the captioning process are presented.

01 May 1998
TL;DR: The number of hours of television watched had only a minor influence on reading achievement and school success. Video watching appears to have a positive effect on comprehension, and vocabulary acquisition seems to be positively affected when coupled with text.
Abstract: A study investigated the effect of video and narrative presentations on children's comprehension and vocabulary acquisition. Participants were students in four heterogeneously grouped eighth-grade English classes (n=16, 22, 21, and 11) in a rural school district in southwestern New York. The short story selected was Sir Arthur Conan Doyle's "The Red-Headed League." It was chosen for its difficulty level--the text is at the instructional level of most of the students involved. Each class received a different mode of instruction: one class read the story to themselves; another class viewed a video rendition of the story; another class saw the same video but had captions included on the screen; the final class both read the text version to themselves during class and then viewed the video the following class period. A pretest (a matching test) and a posttest (the same matching test with answers in a different order, a series of multiple-choice questions to measure comprehension and recall, and a short-answer evaluation question to measure critical thinking) were given. Significant findings are that students who read the text had greater vocabulary acquisition, while students who viewed the video showed a greater comprehension of the story. It appears that video watching has a positive effect on comprehension, and vocabulary acquisition seems to be positively affected when coupled with text. Closed captioning is a recent positive addition to teaching reading through television and video. (Contains two figures, 14 references, and sample pretest and posttests.)

Comparison of Video and Text Narrative Presentations on Comprehension and Vocabulary Acquisition. Darcy Podszebka, Candee Conklin, Mary Apple, and Amy Windus. Paper presented at the SUNY Geneseo Annual Reading and Literacy Research Symposium, Geneseo, NY, May 1998.

The question of what effect television viewing has on children and their reading abilities is one that has been raised by many. This study investigates the effect of video and narrative presentations on children's comprehension and vocabulary acquisition.

Related Research

In a study of over 4,000 children, Neuman (1980) did not find a difference between television viewers and non-viewers in terms of children's fatigue in the morning, in the quality of homework, nor in concentration on task. Neuman also investigated the effects of a technique called "scripting" (1980). In scripting, students view television programs in the classroom while at the same time reading along with a printed script. Teachers pre-teach word analysis and comprehension skills. After viewing the program, students have the opportunity to act out episodes or use the scripts to produce their own version of a scene. This method involves the students in interesting material and gives them an opportunity for self-expression and creativity. Greenstein (1954) compared the amount of television watched with the grade point averages of 67 students. This study found that the number of hours of television watched had only a minor influence on reading achievement and school success. In a study by Smith (1990), adult learners were presented with several television shows that were captioned. These shows included Sesame Street, a soap opera, a courtroom drama, and an episode of Reading Rainbow. The Reading Rainbow program was the most successful with regard to language acquisition and language proficiency. Some positive results of the viewing of closed-captioned shows included the use of new and unusual vocabulary during discussion and written exercises that followed the viewing. A study by Gough (1979) found that commercial television versions of popular book series, including Laura Ingalls Wilder's Little House on the Prairie and Mary Norton's The Borrowers, have led children to look for these books specifically. In 1978, a study by Busch found that 89% of students in grades 2-12 had watched at least one program on television that caused them to read a book (Gough, 1979). This study shows how students can become motivated to read a book after they view characters on television. Having seen programs on television, students become motivated to read to find out more about the characters on the television series. Sesame Street is designed to prepare disadvantaged children for school. After watching ten sequential, hour-long episodes of Sesame Street, Mates and Strommen (1996) gained an understanding of the curriculum taught on the show and subsequently discovered many problems with the show. They described some of the segments as flowing too fast (a number countdown from 12 to 1) for students to understand the concept being taught. Also, some of the segments had no educational value (Cookie Monster eating cookies). The researchers present suggestions to improve the educational value of the show. These suggestions include showing the characters reading signs and messages, for the purpose of figuring out where to go or what to do.

Journal ArticleDOI
TL;DR: A digital DBS system was developed in order to provide TV and data broadcasting services over the Korean Peninsula using Koreasat and its functional requirements, system design, and implementation are introduced.
Abstract: A digital DBS system was developed in order to provide TV and data broadcasting services over the Korean Peninsula using Koreasat. The system was designed and implemented to have studio-quality video, CD-quality audio, a multilingual broadcasting service, a closed captioning service, 4×3/16×9 picture aspect ratios, data services supporting up to 2 Mbps, 99.99% system availability, 99% link availability in the worst month, and pay-channel broadcasting with 500 million subscribers. This paper introduces its functional requirements, system design, and implementation.


Proceedings ArticleDOI
10 Aug 1998
TL;DR: An on-going project is described whose primary aim is to establish the technology of producing closed captions for TV news programs efficiently using natural language processing and speech recognition techniques for the benefit of the hearing impaired in Japan.
Abstract: We describe an on-going project whose primary aim is to establish the technology of producing closed captions for TV news programs efficiently using natural language processing and speech recognition techniques, for the benefit of the hearing impaired in Japan. The project is supported by the Telecommunications Advancement Organisation of Japan with the help of the Ministry of Posts and Telecommunications. We propose that natural language and speech processing techniques be used for efficient closed caption production of TV programs. They enable us to summarise TV news texts into captions automatically, and to synchronise TV news texts with speech and video automatically. The captions are then superimposed on the screen. We propose a combination of shallow methods for the summarisation. For all the sentences in the original text, an importance measure is computed based on key words in the text to determine which sentences are important. If some parts of the sentences are judged unimportant, they are shortened or deleted. We also propose a keyword pair model for the synchronisation between text and speech.
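The shallow summarisation step can be sketched as keyword-frequency scoring under a sentence budget; the scoring and budget below are simplifications of the paper's importance measure:

```python
import re
from collections import Counter

def summarise(text, budget=3):
    """Keep the `budget` sentences scoring highest on key-word weight."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))  # crude key-word weights
    def score(s):
        toks = re.findall(r"\w+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)
    keep = set(sorted(sentences, key=score, reverse=True)[:budget])
    return [s for s in sentences if s in keep]  # preserve original order
```

The paper's further step of shortening sentences judged partly unimportant, and its keyword pair model for text-speech synchronisation, are not shown here.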

Patent
22 Dec 1998
TL;DR: In this article, a decoder (300) separates the closed captioning from the video signal and a closed caption censor (400) receives the separated closed caption from the decoder and censors the same to form censored closed caption.
Abstract: A censoring device (50) to censor closed captioning forming part of a video signal comprises a decoder (300) receiving an incoming video signal including closed captioning. The decoder (300) separates the closed captioning from the video signal. A closed caption censor (400) receives the separated closed captioning from the decoder and censors the same to form censored closed captioning. A generator (500) receives the video signal and the censored closed captioning and combines the censored closed captioning and the video signal.
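The censor stage itself amounts to a substitution pass over the decoded caption text before the generator recombines it with the video signal; the blocked-word list is illustrative:

```python
import re

BLOCKED = {"darn", "heck"}  # illustrative word list

def censor_captions(caption_text):
    """Mask blocked words in caption text separated out by the decoder."""
    def mask(m):
        w = m.group(0)
        return "*" * len(w) if w.lower() in BLOCKED else w
    return re.sub(r"[A-Za-z']+", mask, caption_text)

# decoder -> censor -> generator pipeline:
print(censor_captions("Well, darn it all to heck."))
# Well, **** it all to ****.
```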

Proceedings Article
01 Jan 1998
TL;DR: A soundtrack corpus (representing a single genre of television programming) for acoustic analysis and a text corpus (from the same genre) for language modelling indicate that application specific language modelling will be effective for the chosen genre, although a lexicon providing complete lexical coverage is unattainable.
Abstract: The purpose of this research is to investigate methods for applying speech recognition techniques to improve the productivity of off-line captioning for television. We posit that existing corpora for training continuous speech recognisers are unrepresentative of the acoustic conditions of television soundtracks. To evaluate the use of application specific models to this task we have developed a soundtrack corpus (representing a single genre of television programming) for acoustic analysis and a text corpus (from the same genre) for language modelling. These corpora are built from components of the manual captioning process. Captions were used to automatically segment and label the acoustic soundtrack data at sentence level, with manual post-processing to classify and verify the data. The text corpus was derived using automatic processing from approximately 1 million words of caption text. The results confirm the acoustic profile of the task to be characteristically different to that of most other speech recognition tasks (with the soundtrack corpus being almost devoid of clean speech). The text corpus indicates that application specific language modelling will be effective for the chosen genre, although a lexicon providing complete lexical coverage is unattainable. There is a high correspondence between captions and soundtrack speech for the chosen genre, confirming that closed-captions can be a useful data source for generating labelled acoustic data. The corpora provide a high quality resource to support further research into automated speech recognition.

DOI
01 Jan 1998
TL;DR: The paper concludes that Japanese subtitles, though often viewed negatively in English education, can be successfully used in various activities as an effective use of L1 in language learning.
Abstract: This paper is an attempt to demonstrate that English and Japanese subtitles can be successfully combined for use in the language classroom to enhance language learning. Following a review of the research findings on the use of closed captions as reading and vocabulary materials, this paper examines the addition of previewing questions and the combined use of both subtitles as a possible approach to make closed captions more accessible and comprehensible to learners. In order to get student feedback, a brief survey of 144 students was conducted, and the results indicate that overall, they felt less nervous and learned more target vocabulary items, referring to both subtitles as they needed. The paper concludes that Japanese subtitles, though often viewed negatively in English education, can be successfully used in various activities as an effective use of L1 in language learning.

Patent
14 Apr 1998
TL;DR: In this paper, a video program is compressed by selecting typical frames from the full sequence of frames and connecting each selected frame with its related components, such as sound and text; the resulting series of typical frames gives a reasonably accurate representation of the entire program within an allowable information loss.
Abstract: PROBLEM TO BE SOLVED: To automatically produce a satisfactory compressed representation of a video program by selecting typical frames or pictures from the program and connecting each selected frame with its related components, such as sound and text. SOLUTION: The video program is compressed by selecting typical frames from the full sequence of frames. A series of typical frames yields a reasonably accurate representation of the entire video program, covering all of its scenes, within an allowable information loss. The compression method performs sampling based on the contents of the video program. Consequently, not all of the visual information of the original program is contained in the series of selected frames; however, because the frames are coupled with the sound, closed caption text, and other components that are part of the original program, the compressed format suffices to convey the information of the video program.
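Content-based sampling of typical frames is commonly done by starting a new representative frame whenever the picture has changed enough since the last one; the patent does not specify the measure, so the difference metric and threshold here are assumptions:

```python
import numpy as np

def typical_frames(frames, threshold=0.25):
    """Pick representative frame indices by content change.

    frames: list of grayscale images as float arrays scaled to [0, 1].
    A frame is selected when its mean absolute difference from the
    last selected frame exceeds the threshold.
    """
    selected = [0]
    for i in range(1, len(frames)):
        if np.mean(np.abs(frames[i] - frames[selected[-1]])) > threshold:
            selected.append(i)
    return selected  # pair these with the program's sound and caption text
```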