
Showing papers on "Closed captioning published in 1996"


Journal ArticleDOI
TL;DR: This study summarizes an extensive research project on closed-captioned television: caption rates varied considerably among program types, and commonly used words in captioning and their frequency of appearance were analyzed.
Abstract: This study summarizes an extensive research project on closed-captioned television. Caption data were recorded from 205 television programs. Both roll-up and pop-on captions were analyzed. In the first part of the study, captions were edited to remove commercials and then processed by computer to get caption speed data. Caption rates among program types varied considerably. The average caption speed for all programs was 141 words per minute, with program extremes of 74 and 231 words per minute. The second part of the study determined the amount of editing being done to program scripts. Ten-minute segments from two different shows in each of 13 program categories were analyzed by comparing the caption script to the program audio. The percentage of script edited out ranged from 0% (in instances of verbatim captioning) to 19%. In the third part of the study, commonly used words in captioning and their frequency of appearance were analyzed. All words from all the programs in the study were combined into one large computer file. This file, which contained 834,726 words, was sorted and found to contain 16,102 unique words.
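The two computations the study describes, caption speed in words per minute and a sorted unique-word vocabulary, can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual tooling; the caption tuples, function names, and the 60-second sample segment below are all hypothetical.

```python
from collections import Counter

def caption_rate_wpm(captions):
    """Words per minute over a list of (start_sec, end_sec, text) captions."""
    total_words = sum(len(text.split()) for _, _, text in captions)
    duration_min = (captions[-1][1] - captions[0][0]) / 60.0
    return total_words / duration_min

def vocabulary(captions):
    """Count unique (case-folded, punctuation-stripped) words across captions."""
    counts = Counter()
    for _, _, text in captions:
        counts.update(w.strip(".,!?").lower() for w in text.split())
    return counts

# Hypothetical 60-second caption segment
caps = [(0.0, 20.0, "Good evening and welcome"),
        (20.0, 40.0, "Tonight we look at captioning"),
        (40.0, 60.0, "and welcome our guest")]
print(caption_rate_wpm(caps))   # caption speed for the segment
print(len(vocabulary(caps)))    # number of unique words
```

Run over 205 programs instead of three lines, the same counting yields figures like the study's 141-wpm average and 16,102 unique words out of 834,726 total.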

83 citations


Proceedings ArticleDOI
01 Apr 1996
TL;DR: The goal of the VISION (Video Indexing for SearchIng Over Networks) project is to establish a comprehensive, online digital video library by developing automatic mechanisms to populate the library and provide content-based search and retrieval over computer networks.
Abstract: The goal of the VISION (Video Indexing for SearchIng Over Networks) project is to establish a comprehensive, online digital video library. We are developing automatic mechanisms to populate the library and provide content-based search and retrieval over computer networks. The salient feature of our approach is the integrated application of mature image and video processing, information retrieval, speech feature extraction and word-spotting technologies for efficient creation and exploration of the library materials. First, full-motion video is captured in real-time with flexible qualities to meet the requirements of library patrons connected via a wide range of network bandwidths. Then, the videos are automatically segmented into a number of logically meaningful video clips by our novel two-step algorithm based on video and audio contents. A closed caption decoder and/or word-spotter is being incorporated into the system to extract textual information to index the video clips by their contents. Finally, all information is stored in a full-text information retrieval system for content-based exploration of the library over networks of varying bandwidths.
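The core idea of indexing video clips by their decoded caption text can be sketched with a toy inverted index. This is a stand-in for the full-text retrieval system the abstract mentions, not the VISION implementation; the clip ids, caption strings, and function names are hypothetical.

```python
from collections import defaultdict

def build_caption_index(clips):
    """Inverted index: caption word -> set of clip ids containing it."""
    index = defaultdict(set)
    for clip_id, caption_text in clips.items():
        for word in caption_text.lower().split():
            index[word].add(clip_id)
    return index

def search(index, query):
    """Return clip ids whose captions contain every query word."""
    sets = [index.get(w.lower(), set()) for w in query.split()]
    return set.intersection(*sets) if sets else set()

clips = {"clip1": "senate votes on budget",
         "clip2": "weather update for tonight",
         "clip3": "budget debate continues in senate"}
idx = build_caption_index(clips)
print(sorted(search(idx, "senate budget")))   # clips matching both words
```

A production system would add ranking and stemming, but the retrieval path is the same: caption text stands in for the video content it accompanies.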

83 citations


Patent
27 Jun 1996
TL;DR: In this paper, a syntax for communicating VBI user information for digital television is provided, which allows the transport of closed captions, non-real-time video, sampled video and AMOL.
Abstract: A method and apparatus are provided for communicating VBI user information for digital television. A syntax is provided which allows the transport of closed captions, non-realtime video, sampled video and AMOL. Non-realtime video can be used to transport various types of data, such as a vertical interval test signal (VITS) through the system at full resolution. The provision in the syntax of a count for each type of VBI user information enables the adjustment of a digital television data stream to accommodate variable amounts and types of such information without space being reserved in advance. The provision of a priority number in the syntax enables decoders to discard VBI user information priorities which are not supported by the particular decoder.
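The count-plus-priority scheme the patent describes can be illustrated with a toy serialization. The byte layout below is entirely hypothetical (the patent's actual syntax is not reproduced here); the sketch only shows why a per-record count lets the stream carry variable amounts of VBI user data, and how a priority field lets a decoder discard record types it does not support.

```python
from dataclasses import dataclass

@dataclass
class VBIRecord:
    data_type: int   # e.g. 0 = closed captions, 1 = non-real-time video (hypothetical codes)
    priority: int    # decoders discard priorities they do not support
    payload: bytes

def parse_vbi_user_data(buf):
    """Parse a hypothetical layout: a leading record count, then for each
    record one byte of type, one of priority, one of payload length, and
    the payload itself. The count means no space is reserved in advance."""
    count = buf[0]
    records, pos = [], 1
    for _ in range(count):
        dtype, prio, length = buf[pos], buf[pos + 1], buf[pos + 2]
        records.append(VBIRecord(dtype, prio, buf[pos + 3:pos + 3 + length]))
        pos += 3 + length
    return records

def supported(records, max_priority):
    """Keep only records whose priority this decoder supports."""
    return [r for r in records if r.priority <= max_priority]

buf = bytes([2,           # two records follow
             0, 1, 2, 0xAA, 0xBB,   # caption record, priority 1, 2-byte payload
             1, 5, 1, 0xCC])        # video record, priority 5, 1-byte payload
recs = parse_vbi_user_data(buf)
print(len(supported(recs, max_priority=3)))  # records a priority-3 decoder keeps
```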

64 citations


Patent
13 Sep 1996
TL;DR: In this paper, a passenger control handset (PCH) in Braille is utilized to transmit keystrokes to a seat electronics unit (SEU) as keystroke signals and processed by a device driver.
Abstract: Audio menuing for the visually impaired, closed captioning for the hearing impaired and a graphical tab control user interface for an interactive flight entertainment system (IFES). In the preferred embodiment, a passenger control handset (PCH) in Braille is utilized. Given inputs from the passenger through use of the PCH, the keystrokes are transmitted to a seat electronics unit (SEU) as keystroke signals and processed by a device driver. The device driver transmits the keystroke signals to the pre-existing user interface and to the present invention's audio menu module. For the audio menu module, the menu resource database makes available a file of various audio information corresponding to various passenger keystroke inputs. Once the appropriate passenger output information is retrieved from the menu resource database, the information is output to the passenger via a display device and a headset coupled to the SEU. A closed captioning capability is enabled when a passenger selects a closed captioning option icon on the screen of the display device. Once such selection is made, audio information is printed and displayed to the passenger on the screen of the display device. A touch screen user interface having graphical tab controls for paging is also provided.

59 citations


Proceedings ArticleDOI
Rakesh Mohan1
TL;DR: In this paper, the authors present a system that automatically captures and processes TV news programs into a database that can be searched over the internet by submitting simple English queries; the result of a query is a hyperlinked list of matching news stories.
Abstract: Our goal is to enable viewers to access TV programs based on their content. Towards this end, we present a system that automatically captures and processes TV news programs into a database that can be searched over the internet. Users browse this database by submitting simple English queries. The result of a query is a hyperlinked list of matching news stories. Clicking on any item in the list immediately launches a video of the pertinent part of the news broadcast. We segment TV news broadcasts into distinct news stories. We then index each story as a separate entity. In reply to a query, videos for these news stories are displayed rather than the whole TV program. News programs are usually accompanied by a transcript in closed caption text. The closed caption text contains markers for story boundaries. Due to the live nature of TV news programs, the closed caption lags the actual audio/video by varying amounts of time up to a few seconds. The closed caption text thus has to be shifted to be aligned in time to the video. We use video and audio events to do this synchronization. The closed caption for each story is entered into a database. In response to a query, the database retrieves and ranks the matching closed caption stories. An HTML document is returned to the user which lists: 1) the name and time of the news program that this story belongs to, 2) thumbnails providing a visual summary of the story, 3) closed caption text. To view a news story, the user simply clicks on an item from the list and the video for that story is streamed onto a media player at the user side. This system maintains the manner of presentation of the media, namely video for TV programs, while allowing the common search and selection techniques used on the web. © 1996 SPIE--The International Society for Optical Engineering.
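The caption-to-video synchronization step can be sketched as a uniform timestamp shift: once a detected audio/video event is matched to the caption line that describes it, the measured lag is subtracted from all caption times. This is a simplified stand-in for the paper's event-based method; the function name, caption tuples, and the 2-second lag below are hypothetical.

```python
def align_captions(captions, anchor_caption_time, anchor_event_time):
    """Shift (start, end, text) caption tuples so a caption known to match
    a detected audio/video event lines up with that event. Live captions
    lag the broadcast, so the offset is subtracted uniformly."""
    offset = anchor_caption_time - anchor_event_time
    return [(start - offset, end - offset, text)
            for start, end, text in captions]

caps = [(5.0, 8.0, "GOOD EVENING"), (8.0, 12.0, "OUR TOP STORY TONIGHT")]
# Suppose the anchor phrase was spoken at t=3.0 but captioned at t=5.0
aligned = align_captions(caps, anchor_caption_time=5.0, anchor_event_time=3.0)
print(aligned[0])   # first caption, shifted earlier by the measured lag
```

In practice the lag varies over a broadcast, so a real system would re-estimate the offset at multiple anchor events rather than applying one global shift.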

57 citations


Patent
08 Mar 1996
TL;DR: In this article, an electronic discussion group includes interaction with closed captioning from a media program, in substantially real-time, and video clips from the program may also be displayed on users' computer terminals.
Abstract: An electronic discussion group includes interaction with closed captioning (10) from a media program, in substantially real time. Video clips from the program may also be displayed on users' computer terminals (5).

33 citations


Patent
24 Oct 1996
TL;DR: In this paper, a video signal processing system produces an output signal suitable for producing a closed caption display and is responsive to a "freeze" command from a user for modifying generation of the output signal such that the content of the closed caption region of the display does not change.
Abstract: A video signal processing system produces an output signal suitable for producing a closed caption display and is responsive to a "freeze" command from a user for modifying generation of the output signal such that the content of the closed caption region of the display does not change. The system provides for user selection of various freeze modes including one in which just the caption region of a display is frozen, another in which just the video region of the display is frozen, and another in which both the caption and video regions of the display are frozen.
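The three freeze modes reduce to a per-region choice between the live feed and a frozen snapshot. The sketch below illustrates that selection logic only; the enum, function, and string placeholders for video/caption regions are hypothetical, not the patented signal-processing implementation.

```python
from enum import Enum

class FreezeMode(Enum):
    CAPTION_ONLY = 1   # freeze just the caption region
    VIDEO_ONLY = 2     # freeze just the video region
    BOTH = 3           # freeze caption and video regions

def compose_frame(live_video, live_caption, frozen_video, frozen_caption, mode):
    """Pick, per region, the frozen snapshot or the live feed."""
    video = frozen_video if mode in (FreezeMode.VIDEO_ONLY, FreezeMode.BOTH) else live_video
    caption = frozen_caption if mode in (FreezeMode.CAPTION_ONLY, FreezeMode.BOTH) else live_caption
    return video, caption

# Caption-only freeze: video keeps playing, caption text stops changing
print(compose_frame("live-v", "live-c", "frozen-v", "frozen-c",
                    FreezeMode.CAPTION_ONLY))
```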

31 citations


Patent
07 Nov 1996
TL;DR: In this article, a television system in which at least program title information for programs which are to be transmitted in the future is transmitted in advance to form a channel guide listing is described.
Abstract: In a television system in which at least program title information for programs which are to be transmitted in the future is transmitted in advance to form a channel guide listing, apparatus is provided for acquiring one of the title information and the current date, and generating a display signal comprising data representing a text screen containing one of the title information and the current date, for recording a user-viewable screen display on a video tape ahead of the television program signal. The title or date information acts as a leader to the following television program. In a second embodiment of the invention, in those instances where descriptive text accompanies the program listing, apparatus of the invention records the descriptive text relating to the title, the star, the director, or the context of the program.

29 citations


18 Apr 1996
TL;DR: This paper examines 22 empirical computer-assisted language learning (CALL) studies published between 1989 and 1994, and 13 reviews and syntheses published between 1987 and 1992, pertaining to CALL in higher education in the United States, and provides three general conclusions.
Abstract: This paper examines 22 empirical computer-assisted language learning (CALL) studies published between 1989 and 1994, and 13 reviews and syntheses published between 1987 and 1992, pertaining to CALL in higher education in the United States. A "three streams" framework helps to place CALL in a larger context and illustrate its several dimensions. Any specific CALL program involves decisions in relation to developments in at least three fields: educational psychology; linguistics; and computer technology. These three fields may be conceptualized as streams, where each stream flows more or less independently of the others, but where the practice of CALL at any given time requires making a passage across all three. An interpretive summary of five major findings from the review of the empirical CALL studies is offered: (1) captioning video segments can dramatically boost student comprehension; (2) CALL can connect students with other people inside and outside of the classroom, promoting natural and spontaneous communication in the target language; (3) the type of CALL feedback provided to students can play a central role in learning; (4) student attitudes toward CALL are not consistently linked to student achievement using CALL; and (5) CALL can substantially improve achievement as compared with traditional instruction. This paper also provides three general conclusions, each accompanied by recommendations for future CALL practice and research. Appendices include the material search procedure; captioning information; supplementary findings from the empirical studies; individual summaries of empirical studies; and individual summaries of CALL and Computer-Assisted Instruction (CAI) reviews. (Contains 43 references.) (Author/AEF)

16 citations




Journal Article
TL;DR: In this paper, the authors discuss three avenues of research on technology for science, engineering and mathematics education for deaf students at the National Technical Institute for the Deaf at Rochester Institute of Technology.
Abstract: It comes as no surprise that when deaf adolescents are asked to rate characteristics of effective teachers, they place a high importance on the visual representation of course content during lectures (Lang, McKee & Conner, 1993). Mediated instruction has been advocated by effective teachers ever since the earliest forms of transparency and slide projections, and motion picture films, were introduced. As new forms of technology enhanced the general living conditions of deaf people as well, educators have applied them to the classroom. Such was the case, for example, when the acoustic telephone coupler was designed by three deaf inventors in 1964. Shortly after the modem came out, the large, noisy, 250-pound teletypewriters were being lugged into classrooms across the country to provide primary and incidental language learning experiences for deaf students. Not until recently, however, has technology shown great promise to become an integral component of classroom instruction for deaf students. In this paper, I will discuss three avenues of research on technology for science, engineering and mathematics education for deaf students at the National Technical Institute for the Deaf at Rochester Institute of Technology. These arenas of technological research include: 1) direct instruction in the classroom through multimedia approaches; 2) assistive device technologies for enhancing access to classroom lectures in mainstream classes; and 3) use of technology for networking in teacher preparation. Over the past decade, the use of captioning technology has greatly improved access to information in the science, engineering and mathematics classrooms for deaf students. There are at least two kinds of access through captions which play an important part in learning. First, in the classroom, there is what I call primary access to science films through either open or closed captions.
Captioned films are much easier to obtain today and many commercial publishers offer captioned versions of their educational media. Second, there are many improved opportunities for what I call incidental learning of science, through closed captioned television shows, for example. Programs such as "Bill Nye, the Science Guy" can be copied directly from television and permission is given to teachers to use the tapes in their classes for up to three years. Science-related films are frequently seen on regular broadcast and cable television channels and the opportunity for deaf students to learn science, as well as English language skills, through informal viewing shows great promise. Computer software is available for custom captioning as well. One of the principal research concerns at this time includes the effect on learning when verbatim or edited captions are used. In one study conducted at NTID by Hertzog, Stinson, and Keiffer (1989), 32 deaf engineering technologies students viewed two captioned versions of a film about cement manufacturing. Both high and low reading groups benefited from instruction when the captions were on an 8th grade level, while only the high reading group benefited from the 11th-grade level captions. This study shows how the development of technology along with a sound educational research program may lead to optimal teaching and learning strategies. Computer software for direct instruction of deaf students in science is being experimented with across the country. However, without a solid educational research foundation, new CD-ROM and other computer technologies may flounder without direction as was the case with many earlier attempts at computer assisted instruction (CAI). One study now in progress at NTID involves the Content Independent MultiMedia System (CIMMS). 
As described by Dowaliby (1996), CIMMS provides an interface between a teacher or instructional developer and HyperCard, the authoring system employed, which performs all of the programming, graphics, and compilation. …

Book ChapterDOI
04 Mar 1996
TL;DR: A prototype system using closed captions has been developed on top of the INQUERY information access system, aimed at integrating speech recognition and information retrieval into a working system.
Abstract: The problem of information overload can be addressed by applying information filtering to the huge amount of data. Information on radio and television can be filtered using speech recognition of the audio track. A prototype system using closed captions has been developed on top of the INQUERY information access system. Integrating speech recognition and information retrieval into a working system is a major challenge. The open problems are the selection of a document representation model, the recognition and selection of indexing features for speech retrieval, and dealing with the erroneous output of recognition processes.

Proceedings ArticleDOI
TL;DR: It is shown how the techniques can be used to automatically index video files based on closed captions with a typical video capture card, for both compressed and uncompressed video files.
Abstract: A data model for long objects (such as video files) is introduced, to support general referencing structures, along with various system implementation strategies. Based on the data model, various indexing techniques for video are then introduced. A set of basic functionalities is described, including all the frame level control, indexing, and video clip editing. We show how the techniques can be used to automatically index video files based on closed captions with a typical video capture card, for both compressed and uncompressed video files. Applications are presented using those indexing techniques in security control and viewers' rating choice, general video search (from laser discs, CD-ROMs, and regular disks), training videos, and video based user or system manuals. © 1996 SPIE--The International Society for Optical Engineering.
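The frame-level control the abstract describes pairs naturally with caption-based indexing: each caption's arrival time maps to a frame number the player can seek to. The sketch below shows only that timestamp-to-frame mapping; the function name, caption tuples, and frame rate are hypothetical, not the paper's data model.

```python
def caption_frame_index(captions, fps=29.97):
    """Map each caption line to the frame number where it first appears,
    so a caption search hit can seek straight to the matching frame."""
    return {text: int(start * fps) for start, text in captions}

caps = [(0.0, "WELCOME BACK"), (10.0, "IN OTHER NEWS")]
index = caption_frame_index(caps, fps=30)
print(index["IN OTHER NEWS"])   # frame number to seek to for that caption
```

Because the index stores frame numbers rather than byte offsets, the same lookup works whether the underlying video file is compressed or uncompressed.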