
Showing papers on "Closed captioning published in 1994"


Patent
Brian John Cragun, Paul Reuben Day
21 Jun 1994
TL;DR: In this patent, a closed captioning decoder extracts a closed captioning digital text stream from a television signal; a digital processor executing a control program scans the stream for words or phrases matching viewer-specified search parameters, and the corresponding segment of the television broadcast may then be displayed, edited, or saved.
Abstract: A television presentation and editing system uses closed captioning text to locate items of interest. A closed captioning decoder extracts a closed captioning digital text stream from a television signal. A viewer specifies one or more keywords to be used as search parameters. A digital processor executing a control program scans the closed captioning digital text stream for words or phrases matching the search parameters. The corresponding segment of the television broadcast may then be displayed, edited or saved. In one mode of operation, the television presentation system may be used to scan one or more television channels unattended, and save items which may be of interest to the viewer. In another mode of operation, the system may be used to assist editing previously stored video by quickly locating segments of interest.
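The keyword-scanning step the patent describes can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patented implementation; the `Caption` record, the `find_segments` helper, and the padding behavior are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Caption:
    start: float   # seconds into the broadcast
    end: float
    text: str

def find_segments(captions, keywords, padding=5.0):
    """Return (start, end) spans of the broadcast whose caption text
    contains any of the search keywords, padded slightly so a saved
    clip includes surrounding context."""
    keywords = [k.lower() for k in keywords]
    hits = []
    for cap in captions:
        text = cap.text.lower()
        if any(k in text for k in keywords):
            hits.append((max(0.0, cap.start - padding), cap.end + padding))
    return hits

captions = [
    Caption(0.0, 4.0, "Good evening, here are tonight's headlines."),
    Caption(4.0, 9.0, "A new weather satellite launched today."),
    Caption(9.0, 14.0, "Sports results after the break."),
]
print(find_segments(captions, ["satellite"]))  # → [(0.0, 14.0)]
```

In the unattended-scanning mode the patent mentions, the same matcher would simply run continuously over the decoded text of one or more channels, saving any span it returns.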

585 citations


Patent
22 Mar 1994
TL;DR: In this patent, a system for encoding, storing, and displaying captions includes a modem for transmitting a signal including one or more closed captions generated in real time; a videotape player that plays a videotape of a television program and outputs both a video signal and a time code signal corresponding to the time position of the tape; an encoder that receives the video signal and a caption input and integrates them into one signal for transmission; and a computer that receives captions from the modem and the time code output from the videotape player.
Abstract: A system for encoding, storing, and displaying captions includes a modem for transmitting a signal including one or more closed captions generated in real time, a videotape player for playing a videotape of a television program including both a video signal and a time code signal corresponding to a time position of the video tape, an encoder for receiving the video signal and a caption input and integrating the video signal and the caption input into one integrated signal for transmission, and a computer that receives captions from the modem and a time code output from the video tape player. The computer transmits the received captions to an encoder and simultaneously generates data records storing the received captions with a time code stamp corresponding to the time code signal input when the caption was received. When a playback of the captioned television program is desired, the computer retrieves the stored caption data record and synchronizes the time code stamp of each caption in the data record with the received time code output. Thus, the computer supplies the synchronized caption to the encoder for insertion into the video signal from the videotape player, whereby captions are provided for refeeds of television programs without requiring a second generation recording of the originally captioned feed.
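The core of the synchronization idea, stamping each live caption with the tape's time code and releasing it again when a replay reaches that stamp, can be sketched as follows. This is a sketch of the concept only, not the patented system; the record layout, class names, and 30 fps assumption are all hypothetical.

```python
# Hypothetical record format: during the live feed, each caption is
# stored with the videotape time code in effect when it arrived.
caption_log = [
    ("01:00:02:00", "Welcome back to the program."),
    ("01:00:07:15", "Our guest tonight is a marine biologist."),
]

def timecode_to_frames(tc, fps=30):
    """Convert an HH:MM:SS:FF time code to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

class RefeedSynchronizer:
    """On replay, emit each stored caption once the tape's current
    time code reaches that caption's stamp, so the encoder can insert
    it into the outgoing video signal."""

    def __init__(self, log):
        self.log = sorted(log, key=lambda rec: timecode_to_frames(rec[0]))
        self.next_idx = 0

    def poll(self, current_tc):
        """Return the captions that are now due, in stamp order."""
        due = []
        now = timecode_to_frames(current_tc)
        while (self.next_idx < len(self.log)
               and timecode_to_frames(self.log[self.next_idx][0]) <= now):
            due.append(self.log[self.next_idx][1])
            self.next_idx += 1
        return due
```

Polling this object once per frame against the player's time code output reproduces the refeed behavior described above without a second-generation recording of the originally captioned feed.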

84 citations


Proceedings ArticleDOI
Tennenhouse, Adam, Carver, Houh, Ismert, Lindblad, Stasior, Wetherall, Bacher, Chang
15 May 1994
TL;DR: This paper describes a set of computer-participative applications that demonstrate the present day viability of applications that participate in, i.e., actively process, live media-based information.
Abstract: The ViewStation architecture embodies a software-oriented approach to the support of interactive media-based applications. Starting from the premise that the raw media data, e.g., the video pixels themselves, must eventually be made accessible to the application, we have derived a set of architectural guidelines for the design of media processing environments. The resultant ViewStation architecture, as described in this paper, consists of the VuSystem, a complete media programming environment, and the VuNet, a substrate for the acquisition, communication, and rendering of video and closed-caption text. We describe a set of computer-participative applications that demonstrate the present-day viability of applications that participate in, i.e., actively process, live media-based information. Early performance results illustrate the affordability and benefits of our software-oriented approach.
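The "computer-participative" idea, application-level modules that actively process live media rather than merely relay it, can be sketched as a small pipeline. The module classes and interfaces below are purely illustrative and are not the actual VuSystem programming interfaces.

```python
# A minimal sketch of a software-oriented media pipeline: frames flow
# through application-level modules, each of which may inspect the
# raw data. Names here are invented for illustration.
class Module:
    def __init__(self):
        self.downstream = None

    def connect(self, other):
        self.downstream = other
        return other

    def emit(self, frame):
        if self.downstream:
            self.downstream.receive(frame)

    def receive(self, frame):
        self.emit(frame)  # default behavior: pass through

class CaptionFilter(Module):
    """A computer-participative stage: it actively processes the live
    closed-caption text instead of only forwarding it."""
    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword
        self.matches = []

    def receive(self, frame):
        if self.keyword in frame["caption"].lower():
            self.matches.append(frame["caption"])
        self.emit(frame)

class Sink(Module):
    """Stand-in for a rendering endpoint."""
    def __init__(self):
        super().__init__()
        self.rendered = []

    def receive(self, frame):
        self.rendered.append(frame["caption"])

source = Module()
fltr = CaptionFilter("election")
sink = Sink()
source.connect(fltr).connect(sink)
for text in ["Top story: election results tonight.", "Weather is next."]:
    source.emit({"caption": text})
```

The point of the sketch is the architectural premise stated above: because the raw data reaches ordinary application code, a filter stage like `CaptionFilter` can be written without touching the acquisition or rendering layers.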

55 citations


Journal ArticleDOI
TL;DR: The test addresses the question of how two types of metacognitive strategies, written and spoken Advance Organizers (AOs) and verbatim Captioning (CP) may facilitate L2 comprehension and recall.
Abstract: The use of metacognitive strategies of learning and instruction, such as content abstracts or previews, subtitles, and captioning (on-screen foreign-language subtitles), has been a recurrent pedagogical tool for facilitating foreign language (L2) instruction. New technology has broadened their scope and multiplied the ways in which they can be used in L2 computer-based applications. A pilot test was carried out using a hypermedia instructional application for Spanish: "Operacion Futuro." The test addresses the question of how two types of metacognitive strategies, written and spoken Advance Organizers (AOs) and verbatim Captioning (CP), may facilitate L2 comprehension and recall.

21 citations


01 Jan 1994
TL;DR: Results indicate that comprehension is higher when captions are color-coded for speaker identification than when captions are black-and-white, and that there are no significant differences in comprehension between centered captions and captions with variable placement dependent on the location of the speaker.
Abstract: Captioning is the process of providing a synchronized written script (captions) to accompany auditory information. This article describes programs available for captioning digital media on computers, and discusses the results of a study on color-coding and placement of captions. Seventy-two students in the Preparatory Studies Program (PSP) at Gallaudet University (Washington, D.C.) participated in the study (PSP enrolls deaf and hard-of-hearing students and prepares them for college). A 15-minute segment from a Disney film was used in the study. Four versions of digital captions were prepared: (1) captions color-coded for speaker identification, centered at the bottom of the screen; (2) black-and-white captions, centered at the bottom of the screen; (3) color-coded captions with placement dependent on the location of the speaker; and (4) black-and-white captions with placement dependent on the speaker's location. Results indicate that comprehension is higher when captions are color-coded for speaker identification than when captions are black-and-white. There are no significant differences between centered captions and captions with variable placement dependent on the location of the speaker.

Digital Captioning: Effects of Color-Coding and Placement in Synchronized Text-Audio Presentations. Cynthia M. King (Educational Foundations and Research, Gallaudet University, 800 Florida Avenue, NE, Washington, DC 20002-3695; cmking@gallua.gallaudet.edu), Carol J. LaSasso (Education, Gallaudet University; cjlasasso@gallua.gallaudet.edu), and Douglas D. Short (Institute for Academic Technology, 2525 Meridian Parkway, Suite 400, Durham, NC 27713).

The most common form of captioning is that for analog television, where captions are embedded in Line 21 of the video signal and displayed only to those who have an external decoder or a television with a built-in decoder (Armon, Glisson, & Goldberg, 1992; Bess, 1993). New methods of captioning, however, are beginning to be developed for digital television and computer-based multimedia products (Armon et al., 1992; Hutchins, 1993; Short, 1992; Short & King, 1994). King (1993) provides an extensive discussion of captioning and why multimedia developers should incorporate it into their products. Additional discussion of issues related to ensuring that deaf and hard-of-hearing people have access to information presented auditorially can be found in Jordan (1992) and Kaplan and De Witt (1994), as well as in electronic resources on the Internet, such as ADA-LAW (ada-law@vm1.nodak.edu) and Equal Access to Software and Information (easi@sjuvm.stjohns.edu). The present paper addresses: (a) digital captioning, (b) captioning format research and standards, and (c) effects of color-coding and placement of captions used to represent speaker identification on the comprehension of deaf and hard-of-hearing viewers.
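The study's 2x2 design, color-coding crossed with placement, maps naturally onto a caption-cue data structure. The sketch below is an invented illustration of that design space, not software from the study; the `Cue` record, the speaker names, and the color palette are all assumptions.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Cue:
    start: float     # seconds
    end: float
    speaker: str
    text: str

# Assumed palette: one distinct color per speaker for identification.
SPEAKER_COLORS = {"Speaker A": "yellow", "Speaker B": "cyan"}

def render(cue, color_coded, variable_placement, speaker_pos=None):
    """Return display attributes for one cue under a given condition.
    color_coded / variable_placement correspond to the study's two
    experimental factors."""
    return {
        "text": cue.text,
        "color": (SPEAKER_COLORS.get(cue.speaker, "white")
                  if color_coded else "white"),
        "position": (speaker_pos
                     if (variable_placement and speaker_pos)
                     else "bottom-center"),
    }

cue = Cue(12.0, 15.5, "Speaker A", "Look at the water!")
# The four caption versions in the study correspond to the four
# combinations of the two boolean flags:
for color_coded, variable in product([True, False], repeat=2):
    print(render(cue, color_coded, variable, speaker_pos="upper-left"))
```

Framing the conditions as two independent flags makes the reported result easy to state: the `color` factor affected comprehension, while the `position` factor did not.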

9 citations


01 Mar 1994
TL;DR: A study investigated the attitudes of adult university students of English as a Second Language (ESL) toward use of closed captioned television (CCTV) as an instructional tool, finding most students felt CCTV was beneficial to some extent.
Abstract: A study investigated the attitudes of adult university students of English as a Second Language (ESL) toward use of closed-captioned television (CCTV) as an instructional tool. Students at the intermediate (n=51) and advanced (n=55) levels of ESL study in classes using CCTV were administered a questionnaire concerning their perceptions of the method, and 11 faculty members answered a questionnaire about student responses to CCTV, their own experiences with it, and problems associated with its use. Most students indicated they liked both closed-captioned and uncaptioned video, consistent with teacher observations. More advanced students preferred uncaptioned television, which did not agree with teacher perceptions. Most students, at both proficiency levels, felt CCTV was beneficial to some extent. Instructors were more ambivalent about its benefits. It is suggested that advanced students liked uncaptioned television because of more proficient listening skills, while more intermediate students found captioning distracting. To some extent, these perceptions may also be attributed to technical problems with CCTV use, textual flaws in the materials, and teacher attitudes. Some recommendations are made for improving use of CCTV in the second language classroom. The two questionnaires are appended. Contains 36 references.

Closed Captioning: Students' Responses. By Donald L. Weasenforth.

8 citations


01 Apr 1994
TL;DR: A pilot project, which included 18 elementary students with deafness enrolled in the TRIPOD program within the Burbank (California) Public Schools, applied a personal video captioning technology in a workstation setting to a weekly writing experience that involved translating short American Sign Language video stories into written English captions.
Abstract: The CC School project, which included 18 elementary students with deafness enrolled in the TRIPOD program within the Burbank (California) Public Schools, applied a personal video captioning technology in a workstation setting to a weekly writing experience that involved translating short American Sign Language video stories into written English captions. A typical workstation setup includes a personal computer, two video recorders, a character generator, and a video monitor. The equipment is configured to allow a student to watch a videotape, develop captions, and insert them at the appropriate place on the videotape. Students translated 40 stories over 2 academic years. The pilot project resulted in students demonstrating increases in fluency of writing and improvements in their knowledge of the structural properties of English. This led to a subsequent project in which personal captioning technology is being redesigned for students with different types of language-related learning needs. Six school programs (three serving students with deafness and three serving students with learning disabilities) are implementing the program to design and evaluate personal captioning experiences pertinent to the learners' needs. Goals for the 3-year project and planned activities for each of the 3 years are listed, emphasizing plans for implementing a computer communication network for electronic mail and conferencing. (Contains 10 references.)

2 citations