
Showing papers on "Smacker video published in 1997"


Journal ArticleDOI
TL;DR: The scope of the MPEG-4 video standard is described and the structure of the video verification model under development is outlined, to provide a fully defined core video coding algorithm platform for the development of the standard.
Abstract: The MPEG-4 standardization phase has the mandate to develop algorithms for audio-visual coding allowing for interactivity, high compression, and/or universal accessibility and portability of audio and video content. In addition to the conventional "frame"-based functionalities of the MPEG-1 and MPEG-2 standards, the MPEG-4 video coding algorithm will also support access and manipulation of "objects" within video scenes. The January 1996 MPEG Video Group meeting witnessed the definition of the first version of the MPEG-4 video verification model-a milestone in the development of the MPEG-4 standard. The primary intent of the video verification model is to provide a fully defined core video coding algorithm platform for the development of the standard. As such, the structure of the MPEG-4 video verification model already gives some indication about the tools and algorithms that will be provided by the final MPEG-4 standard. The paper describes the scope of the MPEG-4 video standard and outlines the structure of the MPEG-4 video verification model under development.

670 citations


Patent
08 Nov 1997
TL;DR: In this paper, a video system comprising integrated random access video technologies and video software architectures for the automated selective retrieval of non-sequentially stored parallel, transitional, and overlapping video segments from a single variable content program source, responsive to a viewer's pre-established video content preferences is presented.
Abstract: This invention relates to a video system comprising integrated random access video technologies and video software architectures for the automated selective retrieval of non-sequentially stored parallel, transitional, and overlapping video segments from a single variable content program source, responsive to a viewer's pre-established video content preferences. Embodiments of the video system permit the automatic transmission of the selected segments from a variable content program as a seamless, continuous, and harmonious video program, and the transmission of the selected segments from an interactive video game further responsive to the logic of the interactive video game. The viewer's video content preferences are stored in the video system and/or in a compact portable memory device that facilitates the automatic configuration of a second video system. The system's controls also provide an editor of a variable content program the capability to efficiently preview automatically selected video segments, permitting the editor to indicate the inclusion of the selected segments in the program to be viewed by a viewer. The system further integrates fiber optic communications capabilities and read/write laser disc player capabilities to facilitate the downloading of a variable content program from a source remote to the system.

495 citations


Proceedings ArticleDOI
Shih-Fu Chang1, William Chen1, Horace J. Meng1, Hari Sundaram1, Di Zhong1 
01 Nov 1997
TL;DR: A novel, real-time, interactive Web system based on the visual paradigm, with spatio-temporal attributes playing a key role in video retrieval; users can easily retrieve complex video clips such as those of skiers and baseball players.
Abstract: The rapidity with which digital information, particularly video, is being generated has necessitated the development of tools for efficient search of these media. Content-based visual queries have primarily focused on still image retrieval. In this paper, we propose a novel, real-time, interactive system on the Web, based on the visual paradigm, with spatio-temporal attributes playing a key role in video retrieval. We have developed algorithms for automated video object segmentation and tracking and use real-time video editing techniques while responding to user queries. The resulting system performs well, with the user being able to retrieve complex video clips such as those of skiers and baseball players with ease.

392 citations


Journal ArticleDOI
Minerva M. Yeung1, Boon-Lock Yeo1
TL;DR: This work proposes techniques to analyze video and build a compact pictorial summary for visual presentation and presents a set of video posters, each of which is a compact, visually pleasant, and intuitive representation of the story content.
Abstract: Digital video archives are likely to be accessible on distributed networks which means that the data are subject to network congestion and bandwidth constraints. To enable new applications and services of digital video, it is not only important to develop tools to analyze and browse video, view query results, and formulate better searches, but also to deliver the essence of the material in compact forms. Video visualization describes the joint process of analyzing video and the subsequent derivation of representative visual presentation of the essence of the content. We propose techniques to analyze video and build a compact pictorial summary for visual presentation. A video sequence is thus condensed into a few images-each summarizing the dramatic incident taking place in a meaningful segment of the video. In particular, we present techniques to differentiate the dominance of the content in subdivisions of the segment based on analysis results, select a graphic layout pattern according to the relative dominances, and create a set of video posters, each of which is a compact, visually pleasant, and intuitive representation of the story content. The collection of video posters arranged in temporal order then forms a pictorial summary of the sequence to tell the underlying story. The techniques and compact presentations proposed offer valuable tools for new applications and services of digital video including video browsing, query, search, and retrieval in the digital libraries and over the Internet.
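The dominance analysis and layout selection described above can be sketched as follows. This is an illustrative sketch, not the authors' algorithm: dominance is reduced to a subdivision's share of the segment's frames, the 0.25 threshold is an invented tuning parameter, and `patterns` is a hypothetical mapping from dominant-shot count to a layout name.

```python
def dominance_ranking(frame_counts):
    """Relative dominance of each subdivision of a segment, measured
    here simply as its share of the segment's total frames, largest first."""
    total = sum(frame_counts)
    shares = [(i, n / total) for i, n in enumerate(frame_counts)]
    return sorted(shares, key=lambda s: -s[1])

def choose_layout(frame_counts, patterns):
    """Pick a graphic layout pattern according to relative dominance:
    count the subdivisions whose share exceeds 0.25 (an arbitrary
    threshold) and look that count up in the patterns table, falling
    back to the densest pattern available."""
    dominant = sum(1 for _, share in dominance_ranking(frame_counts)
                   if share > 0.25)
    return patterns.get(dominant, patterns[max(patterns)])
```

For a segment whose first subdivision holds 300 of 500 frames, only one subdivision clears the threshold, so a single-image poster layout would be chosen.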

369 citations


Proceedings Article
30 May 1997
TL;DR: In this paper, an integrated solution for computer assisted video parsing and content-based video retrieval and browsing is presented, based on key-frames selected during abstraction and spatial-temporal variations of visual features, as well as shot-level semantics derived from camera operation and motion analysis.
Abstract: This paper presents an integrated solution for computer assisted video parsing and content-based video retrieval and browsing. The uniqueness and effectiveness of this solution lies in its use of video content information provided by a parsing process driven by visual feature analysis. More specifically, parsing will temporally segment and abstract a video source, based on low-level image analyses; then retrieval and browsing of video will be based on key-frames selected during abstraction and spatial-temporal variations of visual features, as well as some shot-level semantics derived from camera operation and motion analysis. These processes, as well as video retrieval and browsing tools, are presented in detail as functions of an integrated system. Also, experimental results on automatic key-frame detection are given.
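The key-frame abstraction step can be illustrated with a minimal greedy selector: a frame becomes a key frame when its visual features drift far enough from the last selected key frame. The paper's system uses richer low-level image analyses; here each frame is reduced to a hypothetical feature histogram and the threshold is an assumed parameter.

```python
def hist_diff(h1, h2):
    """L1 distance between two equal-length feature histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def select_key_frames(histograms, threshold):
    """Greedy key-frame selection: the first frame is always a key
    frame; a later frame is kept when its histogram differs from the
    last key frame's by more than the threshold."""
    if not histograms:
        return []
    keys = [0]
    for i in range(1, len(histograms)):
        if hist_diff(histograms[keys[-1]], histograms[i]) > threshold:
            keys.append(i)
    return keys
```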

345 citations


Patent
15 May 1997
TL;DR: In this article, a new technique for extracting a hierarchical decomposition of a complex video selection for browsing purposes combines visual and temporal information to capture the important relations within a scene and between scenes in a video, thus allowing analysis of the underlying story structure with no a priori knowledge of the content.
Abstract: A new technique for extracting a hierarchical decomposition of a complex video selection for browsing purposes, combines visual and temporal information to capture the important relations within a scene and between scenes in a video, thus allowing the analysis of the underlying story structure with no a priori knowledge of the content. A general model of hierarchical scene transition graph is applied to an implementation for browsing. Video shots are first identified and a collection of key frames is used to represent each video segment. These collections are then classified according to gross visual information. A platform is built on which the video is presented as directed graphs to the user, with each category of video shots represented by a node and each edge denoting a temporal relationship between categories. The analysis and processing of video is carried out directly on the compressed videos. Preliminary tests show that the narrative structure of a video selection can be effectively captured using this technique.
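A minimal sketch of the scene transition graph idea: shots, each represented here by a key-frame histogram, are grouped by gross visual similarity, and a directed edge records each temporal transition between groups. The greedy first-fit clustering and the distance threshold are simplifications standing in for whatever classifier the patent actually uses.

```python
def hist_distance(h1, h2):
    """L1 distance between two equal-length color histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def build_scene_transition_graph(shot_histograms, threshold):
    """Assign each shot to the first existing cluster within the
    distance threshold (or start a new cluster), then collect directed
    edges between temporally adjacent clusters. Returns the per-shot
    cluster labels and the sorted edge list."""
    labels, centers = [], []
    for h in shot_histograms:
        for ci, c in enumerate(centers):
            if hist_distance(c, h) <= threshold:
                labels.append(ci)
                break
        else:
            centers.append(h)
            labels.append(len(centers) - 1)
    edges = set()
    for a, b in zip(labels, labels[1:]):
        if a != b:
            edges.add((a, b))
    return labels, sorted(edges)
```

A dialogue scene that alternates between two camera setups would yield two nodes joined by edges in both directions, which is the kind of narrative structure the graph is meant to expose.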

333 citations


Journal ArticleDOI
TL;DR: This paper looks at shot detection and characterization using compressed video data directly and proposes a scheme consisting of comparing intensity, row, and column histograms of successive I frames of MPEG video using the chi-square test.
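The proposed comparison can be sketched directly: a chi-square statistic between corresponding histograms of successive I frames, with a shot change declared when it exceeds a threshold. The one-dimensional histograms and the threshold value below are illustrative stand-ins for the paper's intensity, row, and column histograms.

```python
def chi_square(h1, h2):
    """Chi-square distance between two histograms with the same bins;
    empty bin pairs are skipped to avoid division by zero."""
    total = 0.0
    for a, b in zip(h1, h2):
        if a + b > 0:
            total += (a - b) ** 2 / (a + b)
    return total

def detect_shot_changes(histograms, threshold):
    """Return the indices of frames whose histogram differs from the
    previous frame's by more than the threshold, i.e. candidate shot
    boundaries."""
    return [i for i in range(1, len(histograms))
            if chi_square(histograms[i - 1], histograms[i]) > threshold]
```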

204 citations


Patent
03 Oct 1997
TL;DR: In this article, a system, apparatus and method for interactively controlling the rate of real-time video playback and audio track playback is disclosed, where a pre-recorded video CD is played in the player in which the display rate of video images is altered via software embedded on the CD such that the speed of the video is changed by the level of activity on the exercise device.
Abstract: A system, apparatus, and method for interactively controlling the rate of real-time video playback and audio track playback is disclosed. A preferred embodiment of the apparatus is an interactive exercise video system (10) which utilizes a bicycle (14), a bicycle wheel speed detector (22), and an interface unit (32) connected to the wheel speed detector (22) and to a conventional game controller connected to a conventional video game CD player (20), which is in turn connected to a TV (18). A prerecorded video CD is played in the player, in which the display rate of video images is altered via software embedded on the CD such that the speed of the video is changed by the level of activity on the exercise device. The variation of the video frame rate is accomplished by modifying the duration time stamp on each video frame, which is used by the player control program so as to change the sequential time at which each frame is called for display by the conventional video player. The variation of video display rate is independent of the pitch of the audio play rate. To maintain synchronization of the audio with the video without changing the pitch of the audio, portions of the audio are looped back, i.e., replayed.
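The core mechanism, scaling each frame's duration time stamp so the player calls frames up faster or slower, can be sketched as follows. The function names and the speed model (display rate proportional to wheel speed) are assumptions for illustration, not the patent's actual control law.

```python
def rescale_durations(frames, wheel_speed, nominal_speed):
    """Stretch or shrink each frame's display duration so that playback
    speed tracks an activity level such as exercise-bike wheel speed.
    frames: list of (frame_id, duration_ms) pairs; doubling the wheel
    speed halves every duration."""
    factor = nominal_speed / max(wheel_speed, 1e-9)
    return [(fid, dur * factor) for fid, dur in frames]

def display_schedule(frames):
    """Sequential time at which each frame is called for display,
    accumulated from the (possibly rescaled) duration stamps."""
    t, schedule = 0.0, []
    for fid, dur in frames:
        schedule.append((fid, t))
        t += dur
    return schedule
```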

184 citations


Patent
22 Dec 1997
TL;DR: In this article, a player is selectively attached to the video message file to create an executable file which can be delivered as a standard binary file over conventional communications networks, and the recipient can view the received video e-mail, the recipient executes the received file and the attached player automatically plays the video and audio message.
Abstract: Video messages are created in a manner that allows transparent delivery over any electronic mail (e-mail) system. The audio and video components of the message are recorded, encoded, and synchronously combined into a video message file. A player is selectively attached to the video message file to create an executable file which can be delivered as a standard binary file over conventional communications networks. To view the received video e-mail, the recipient executes the received file and the attached player automatically plays the video and audio message or the recipient executes the previously installed player which plays the video message.

172 citations


Proceedings ArticleDOI
TL;DR: A basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation, and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions, and possession times is developed.
Abstract: Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing, and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation still remains hard using currently available techniques. However, a wide range of video has inherent structure, such that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the video compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation, and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions, and possession times. We expect our approach can also be extended to structured video in other domains.

142 citations


Patent
20 Mar 1997
TL;DR: In this paper, a system for capturing, storing and retrieving pre-recorded videos recorded and stored in a compressed digital format at a central distribution site is described, where a plurality of remote distribution locations are connected through fibre optic connections to the central distribution sites.
Abstract: A system for capturing, storing and retrieving prerecorded videos recorded and stored in a compressed digital format at a central distribution site is described. A plurality of remote distribution locations are connected through fibre optic connections to the central distribution site. The remote sites may be of one of several types: a video retail store, a cable television (CATV) head end, a factory environment for mass duplication, electronic video mail, or other types. In the case of a video retail store, VHS videotapes, other format videotapes or other video media may be manufactured on demand in as little as three to five minutes for rental or sell-through. A totally automated manufacturing system is described in which the customers can preview and order prerecorded videos for rental and sale from video kiosks. The selected prerecorded video is then either retrieved from local cache storage or downloaded from the central distribution site for manufacturing onto a blank or reused videotape. One feature of the system is the ability to write a two-hour videotape into a Standard Play (SP) format using a high speed recording device. The MPEG-2 closed group of pictures format is used to compress a full-length prerecorded video and audio into a prerecorded video data file of approximately four gigabytes of storage. The prerecorded video data file can be downloaded from the central site to the remote manufacturing site and written onto a standard VHS tape using a parallel decompression engine to write the entire prerecorded video at high speed onto a standard VHS tape in approximately three minutes.

Proceedings ArticleDOI
01 Nov 1997
TL;DR: PanoramaExcerpts is a video browsing interface that shows a catalogue of two types of video icons: panoramic and keyframe icons, which represents the entire visible contents of a scene extended with camera pan or tilt, which is difficult to summarize using a single keyframe.
Abstract: Browsing is a fundamental function in multimedia systems. This paper presents PanoramaExcerpts, a video browsing interface that shows a catalogue of two types of video icons: panoramic and keyframe icons. A panoramic icon is synthesized from a video segment taken with camera pan or tilt, and extracted using a camera operation estimation technique. A keyframe icon is extracted to supplement the panoramic icons; a shot-change detection algorithm is used. A panoramic icon represents the entire visible contents of a scene extended with camera pan or tilt, which is difficult to summarize using a single keyframe. For the automatic generation of PanoramaExcerpts, we propose an approach to integrate the following: (a) a shot-change detection method that detects instantaneous cuts as well as dissolves, with adaptive control over the sampling rate for efficient processing; (b) a method for locating segments that contain smooth camera pans or tilts, from which the panoramic icons can be synthesized; and (c) a layout method for packing icons in a space-efficient manner. We also describe the experimental results of the above three methods and the potential applications of PanoramaExcerpts.

Proceedings ArticleDOI
01 Nov 1997
TL;DR: A new method is proposed for making correspondences between image clues detected by image analysis and language clues detected by natural language analysis, and is applied to closed-captioned CNN Headline News.
Abstract: The Spotting by Association method for video analysis is a novel method to detect video segments with typical semantics. Video data contains various kinds of information through continuous images, natural language, and sound. For videos to be stored and retrieved in a Digital Library, it is essential to segment the video data into meaningful pieces. To detect meaningful segments, we need to identify the segment in each modality (video, language, and sound) that corresponds to the same story. For this purpose, we propose a new method for making correspondences between image clues detected by image analysis and language clues detected by natural language analysis. As a result, relevant video segments with sufficient information from every modality are obtained. We applied our method to closed-captioned CNN Headline News. Video segments with important events, such as a public speech, meeting, or visit, are detected fairly well.

Patent
Hiroyuki Moteki1, Mamoru Kobayashi1
04 Nov 1997
TL;DR: In this article, an audio-video output device that can simultaneously output video and audio information from multiple input sources and which can constantly display the video or audio from one of these sources is presented.
Abstract: An audio-video output device is provided that can simultaneously output video and audio information from multiple input sources while constantly displaying the video or audio information from one of these sources. An input-side selector 55, which can select car navigation video information 2a, TV video information 3a, or video information 4a from a video player, and output-side selectors 56, which supply the output from the input-side selector 55 and car navigation video information 2a as video display signals to liquid crystal panel 11, which can be split for display, are provided. Consequently, car navigation video information 2a is always displayed on panel 11 even when it is split for display. For audio information, monitoring speaker 19, through which car navigation audio information 2b can be output, is provided, in addition to terminal 37, through which the audio for the video being displayed on panel 11 can be output as FM sound. Consequently, car navigation audio information 2b can be monitored regardless of the type of video information being displayed.

Patent
06 Jun 1997
TL;DR: In this article, a computer based system for displaying and compressing video including a video capture card with a video compressor and a bus interface circuit that acts as a busmaster and outputs uncompressed video and compressed video to a computer bus for display on the computer monitor and storage of compressed video on a memory of the computer.
Abstract: A computer based system for displaying and compressing video including a video capture card with a video compressor and a bus interface circuit that acts as a busmaster and outputs uncompressed video and compressed video to a computer bus for display of the uncompressed video on the computer monitor and storage of the compressed video on a memory of the computer. The system also includes a software virtual interrupt generator that uses timer events provided by a computer system service and a transfer status indicator to generate interrupts to initiate transfer of a new block of video; an overlay controller implemented in software that transfers video from the video capture card over the computer bus to a graphics subsystem for display in a window on the computer monitor in an overlay mode; a display controller implemented in software that causes display of uncompressed video from the video capture card or software decompressed video; a software controller that compresses audio in software and sends video data to be compressed across the computer bus to the compressor; a controller that is implemented in software and calibrates startup delay of the audio input subsystem and uses the delay to synchronize the audio and video; and a user interactive input mechanism for adjusting the rates of compression within a range of acceptable rates that varies as a function of the output target medium for compressed video.

Patent
22 Jul 1997
TL;DR: In this article, an apparatus and method for compressing multiple resolution versions of a video signal is described, where the first video compressor encodes the first resolution version of the video signal to generate a first compressed video bit stream and the second video compressor receives motion vectors or other results of the hierarchical motion estimation (ME) search performed in the first compressor, and uses these results to facilitate the encoding of the reduced resolution version.
Abstract: An apparatus and method for compressing multiple resolution versions of a video signal are disclosed. A first resolution version of a video signal is applied to an input of a first video compressor and to an input of a video scaler. The first video compressor encodes the first resolution version of the video signal to generate a first compressed video bit stream. The video scaler generates a reduced resolution version of the video signal from the first resolution version. The reduced resolution version is supplied to a second video compressor and to the first video compressor. The first video compressor utilizes the reduced resolution version of the video signal in performing a hierarchical motion estimation (ME) search as part of the encoding process for the first resolution version. The second video compressor encodes the reduced resolution version to generate a second compressed bit stream. The second video compressor receives motion vectors or other results of the hierarchical ME search performed in the first video compressor, and uses these results to facilitate the encoding of the reduced resolution version. The apparatus and method may be used in a non-linear video editor, a video server or other video processing system. The video scaler and first and second video compressors may share memory, a transform unit and other processing hardware such that system cost and complexity are reduced.

Patent
18 Jul 1997
TL;DR: In this paper, an improved video recorder/transceiver with expanded functionality, including a capability for storing video and video programs in digital format, editing such programs, transferring such programs onto a hard copy magnetic media, and transmitting such programs to a remote location using a second VCR-ET.
Abstract: An improved video recorder/transceiver with expanded functionality ("VCR-ET") including a capability for storing video and video programs in digital format, editing such programs, transferring such programs onto a hard copy magnetic media, and transmitting such programs to a remote location using a second VCR-ET. The increased functionality is realized through the use of analog to digital conversion, signal compression and intermediate storage in an integrated circuit, random access memory. The recorder/transmitter has capabilities to transmit and receive program information in either a compressed or decompressed format over fiber optic lines, conventional phone lines or microwaves.

Patent
18 Apr 1997
TL;DR: In this article, an apparatus and method for storing and retrieving synchronized audio/video "filmclips" to and from a data file of a multimedia computer workstation includes a storage means for a workstation to store audio and video data as digital data packets to the data file, and retrieval means for the workingstation to retrieve audio and visual data from the file.
Abstract: An apparatus and method for storing and retrieving synchronized audio/video “filmclips” to and from a data file of a multimedia computer workstation includes a storage means for a workstation to store audio and video data as digital data packets to the data file, and retrieval means for the workstation to retrieve audio and video data from the data file. The video data is presented as an image on the display of the workstation, while the audio data is sent to either amplified speakers or headphones. An audio data stream is stored to the data file such that the audio data can be retrieved from the data file and reconstructed into a continuous audio signal. The video data is stored to the data file such that each frame of video data is inserted into the stored audio data stream without affecting the continuity of the audio signal reconstructed by the workstation. Timing information is attached to each frame of video data stored to the file, and indicates a point in the continuous audio data stream which corresponds in time to the frame of video data. A synchronizer displays a frame of video data when the point in the audio data stream, corresponding to the timing information of the retrieved video frame is audibly reproduced by the workstation. The invention also features a video teleconferencing “answering machine” which allows a user to leave an audio/video “filmclip” message on another workstation.
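The storage scheme, video frames inserted into a continuous audio stream with timing information attached, can be sketched as a packet interleaver. The packet layout below is hypothetical; the patent does not specify this format.

```python
def interleave(audio_chunks, video_frames, audio_chunk_ms):
    """Insert each timestamped video frame into the audio packet stream
    without disturbing audio continuity. video_frames is a list of
    (frame, time_ms) pairs in temporal order; a frame is emitted just
    before the audio chunk whose stream time has reached its stamp."""
    stream, vi, t = [], 0, 0
    for chunk in audio_chunks:
        while vi < len(video_frames) and video_frames[vi][1] <= t:
            frame, ts = video_frames[vi]
            stream.append(("video", frame, ts))
            vi += 1
        stream.append(("audio", chunk))
        t += audio_chunk_ms
    # any frames stamped past the end of the audio go at the tail
    for frame, ts in video_frames[vi:]:
        stream.append(("video", frame, ts))
    return stream
```

On playback, a synchronizer would reconstruct the audio from the audio packets alone and display each video frame when the audio clock reaches its attached time stamp.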

Patent
16 Oct 1997
TL;DR: In this paper, a video method and system maintains a transmission of an audio of a video during a viewer controlled freezing, slowing, and/or zooming of the transmission of the video component by automatically selecting, adjusting audio levels, looping, and producing audio effects.
Abstract: A video method and system maintains transmission of the audio of a video during viewer-controlled freezing, slowing, and/or zooming of the transmission of the video component, by automatically selecting, adjusting audio levels, looping, and producing audio effects from a plurality of audio elements of the video, such as foreground and background audio elements, in response to the viewer's control of the transmission of the video component.

Patent
24 Oct 1997
TL;DR: In this paper, a block sequence compiler for compiling a sequence of audio and/or video blocks (e.g., audio tracks, MIDI, video clips, animation, etc.) suitable for producing one or more audio or video output sequences (i.e., audio, video, or multimedia) each having a duration corresponding to user-prescribed criteria.
Abstract: A block sequence compiler for compiling a sequence of audio and/or video blocks (e.g., audio tracks, MIDI, video clips, animation, etc.) suitable for producing one or more audio and/or video output sequences (i.e., audio, video, or multimedia) each having a duration corresponding to user-prescribed criteria. In a preferred embodiment, a user chooses an audio and/or video source segment from a predefined library and prescribes the duration of an audio and/or video sequence. Prior to depositing each audio and/or video segment in the library, the segment is partitioned into audio and/or video blocks that are identified in a corresponding characteristic data table with characteristics including (1) duration, (2) suitability for being used as a beginning or ending of an audio and/or video sequence, and (3) compatibility with each block. Using this characteristic table and the user-prescribed criteria, i.e., duration, the block sequence compiler generates a plurality of audio and/or video sequences satisfying the user-prescribed criteria which can be reviewed, e.g., played, and/or saved for future use.
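The compiler's job, assembling blocks whose tabulated characteristics (duration, begin/end suitability, compatibility) satisfy a user-prescribed duration, can be sketched as a small depth-first search. The dictionary fields (`dur`, `begin`, `end`, `next`) are hypothetical stand-ins for the patent's characteristic data table.

```python
def compile_sequence(blocks, target, tolerance=0.0):
    """Depth-first search for a block sequence that starts on a block
    marked suitable as a beginning, ends on one marked suitable as an
    ending, follows each block's compatibility list ('next'), and
    matches the target duration within the tolerance."""
    by_id = {b["id"]: b for b in blocks}

    def extend(seq, dur):
        last = by_id[seq[-1]]
        if last["end"] and abs(dur - target) <= tolerance:
            return list(seq)
        for nid in last["next"]:
            nd = dur + by_id[nid]["dur"]
            if nd <= target + tolerance:  # prune overlong branches
                found = extend(seq + [nid], nd)
                if found:
                    return found
        return None

    for b in blocks:
        if b["begin"]:
            found = extend([b["id"]], b["dur"])
            if found:
                return found
    return None
```

A real compiler would enumerate several satisfying sequences for the user to review rather than stop at the first, but the search structure is the same.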

Proceedings ArticleDOI
21 Apr 1997
TL;DR: A technique which combines video and audio information together for classification and indexing purposes is presented, and a general framework that uses the results of such classification is proposed for organizing video information.
Abstract: A challenging problem to construct video databases is the organization of video information. The development of algorithms able to organize video information according to semantic content of the data is getting more and more important. This will allow algorithms such as indexing and retrieval to work more efficiently. Until now, an attempt to extract semantic information has been performed using only video information. As a video sequence is constructed from a 2-D projection of a 3-D scene, video processing has shown its limitations especially in solving problems such as object identification or object tracking, reducing the ability to extract semantic characteristics. A possibility to overcome the problem is to use additional information. The associated audio signal is then the most natural way to obtain this information. This paper presents a technique which combines video and audio information together for classification and indexing purposes. The classification is performed on the audio signal; a general framework that uses the results of such classification is then proposed for organizing video information.

Proceedings ArticleDOI
09 Jun 1997
TL;DR: The schema provides a general framework for video object extraction, indexing, and classification and presents new video segmentation and tracking algorithms based on salient color and affine motion features.
Abstract: Object segmentation and tracking is a key component for a new generation of digital video representation, transmission, and manipulation. Example applications include content-based video databases and video editing. We present a general schema for video object modeling, which incorporates low-level visual features and hierarchical grouping. The schema provides a general framework for video object extraction, indexing, and classification. In addition, we present new video segmentation and tracking algorithms based on salient color and affine motion features. Color features are used for intra-frame segmentation; affine motion is used for tracking image segments over time. Experimental evaluation results using several test video streams are included.

Patent
06 May 1997
TL;DR: In this paper, a system for simultaneously creating a plurality of individually customized video products from a plurality of video segments uses a central computer and one or more workstations to control the operation of a video file server and video recorders connected thereto.

Abstract: A system for simultaneously creating a plurality of individually customized video products from a plurality of video segments uses a central computer and one or more workstations to control the operation of a video file server and video recorders connected thereto. The video file server is adapted to simultaneously output the same or different stored video segments on a plurality of video output channels. Each channel is connected to a respective video recorder. An operator enters selection choices into a workstation. The central computer uses the selection choices to select and order a subset of video segments prestored on the video file server. Each selected video segment is directly related to a selection choice. Under control of the central computer, the selected and ordered subset of video segments are output from the video file server to a designated video recorder to make the customized video product. The central computer simultaneously controls the state of the video recorders in coordination with the video file server.

Journal ArticleDOI
TL;DR: This paper describes an object-based video coding scheme (OBVC) that was proposed by Texas Instruments to the emerging ISO MPEG-4 video compression standardization effort and describes the error protection and concealment schemes that enable robust transmission of compressed video over noisy communication channels such as analog phone lines and wireless links.
Abstract: This paper describes an object-based video coding scheme (OBVC) that was proposed by Texas Instruments to the emerging ISO MPEG-4 video compression standardization effort. This technique achieves efficient compression by separating coherently moving objects from the stationary background and compactly representing their shape, motion, and content. In addition to providing improved coding efficiency at very low bit rates, the technique provides the ability to selectively encode, decode, and manipulate individual objects in a video stream. This technique supports all three MPEG-4 functionalities tested in the November 1995 tests, namely, improved coding efficiency, error resilience, and content scalability. This paper also describes the error protection and concealment schemes that enable robust transmission of compressed video over noisy communication channels such as analog phone lines and wireless links. The noise introduced by the communication channel is characterized by both burst errors and random bit errors. Applications of this object-based video coding technology include videoconferencing, video telephony, desktop multimedia, and surveillance video.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: This work presents a new system for video object segmentation and tracking using feature fusion and region grouping, and presents efficient techniques for spatio-temporal video query based on the automatically segmented video objects.
Abstract: Object-based video representation provides great promises for new search and editing functionalities. Feature regions in video sequences are automatically segmented, tracked, and grouped to form the basis for content-based video search and higher levels of abstraction. We present a new system for video object segmentation and tracking using feature fusion and region grouping. We also present efficient techniques for spatio-temporal video query based on the automatically segmented video objects.

Proceedings ArticleDOI
TL;DR: A retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents that facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection is introduced.
Abstract: This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information ('DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection. © 1997 SPIE--The International Society for Optical Engineering.
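The DC part of the 'DC+M' idea can be sketched in a few lines: a coarse per-frame signature built from block means (standing in for the DC coefficients that compressed video exposes directly), compared frame-by-frame across clips. This is a simplified illustration, it omits the motion component and all names here are mine, not the paper's:

```python
import numpy as np

def dc_signature(frame, block=8):
    """Coarse per-frame signature: the mean of each block x block region,
    mimicking the DC coefficients available in MPEG/Motion JPEG streams."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block  # trim to whole blocks
    f = frame[:h, :w]
    return f.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def clip_distance(sigs_a, sigs_b):
    """Mean per-frame L1 distance between two equal-length signature sequences."""
    return float(np.mean([np.abs(a - b).mean() for a, b in zip(sigs_a, sigs_b)]))

# A query clip should match its slightly noisy copy better than unrelated frames.
rng = np.random.default_rng(1)
clip = [rng.uniform(0, 255, (64, 64)) for _ in range(5)]
sigs = [dc_signature(f) for f in clip]
noisy = [dc_signature(f + rng.normal(0, 2, f.shape)) for f in clip]
other = [dc_signature(rng.uniform(0, 255, (64, 64))) for _ in range(5)]
print(clip_distance(sigs, noisy) < clip_distance(sigs, other))
```

Because the signature lives in the compressed domain, a real system can compute it without fully decoding each clip, which is what makes searching a large archive practical.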

Patent
12 Nov 1997
TL;DR: In this paper, a method of allowing the user to control the scale of the horizontal axis and thus the number of video frames that may be symbolically viewed on the workspace window without panning is described.
Abstract: In a digital computer system running a windowed operating system and a video editor, video data is stored in memory, and clips of the video data consisting of sequences of video frames are represented by blocks in a workspace on the video display. A video cursor is provided to scroll through the frames. The disclosure describes a convenient method of allowing the user to control the scale of the horizontal axis, and thus the number of video frames that may be symbolically viewed in the workspace window without panning. The method includes the following steps: first, determine whether the video cursor is being intentionally moved vertically by the user; next, determine the vertical distance the video cursor has moved from a reference point; finally, adjust the number of video frames that may be represented in the workspace in relation to the vertical movement of the video cursor.
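The final step, mapping vertical cursor travel to a horizontal time-scale, amounts to a clamped linear function. A minimal sketch under assumed conventions (drag direction, scale factor, and limits are all my choices, not the patent's):

```python
def frames_visible(base_frames, dy, frames_per_pixel=2,
                   min_frames=10, max_frames=10000):
    """Map vertical cursor travel dy (pixels from the reference point)
    to the number of frames shown on the horizontal axis.

    Here dragging up (positive dy) zooms out to show more frames and
    dragging down zooms in; the result is clamped to sane limits.
    """
    return max(min_frames, min(max_frames, base_frames + dy * frames_per_pixel))

print(frames_visible(100, 0))    # unchanged scale
print(frames_visible(100, 50))   # zoomed out
print(frames_visible(100, -60))  # zoomed in, clamped at the minimum
```

A real editor would also gate this on the "intentional vertical movement" test from the first step, e.g. by requiring dy to exceed a small threshold before rescaling.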

Patent
23 Jul 1997
TL;DR: In this article, a combination of the hardware implementing data compression and decompression based on a vector quantization algorithm with video input/output port and computer interface integrated on a single semiconductor chip provides for a cost-effective solution to processing of continuous-steam video and audio data in real time.
Abstract: A semiconductor chip integrating various functional blocks of a video codec for use in a system for real-time recording and playback of motion video through a computer interface such as a PC-compatible parallel port is disclosed. An innovative combination of hardware implementing data compression and decompression based on a vector quantization algorithm with a video input/output port and computer interface integrated on a single semiconductor chip provides a cost-effective solution to processing continuous-stream video and audio data in real time.
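Vector quantization, the compression principle named above, replaces each input vector (e.g. a small pixel block) with the index of its nearest entry in a shared codebook; decoding is a plain table lookup. A toy software sketch of that principle (the codebook and data are made up, and the chip's actual algorithm is not specified here):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codebook entry
    (squared Euclidean distance)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct approximate vectors by codebook lookup."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
data = np.array([[0.5, -0.2], [9.1, 10.4], [1.0, 9.0]])
idx = vq_encode(data, codebook)
print(idx.tolist())  # indices transmitted in place of the raw vectors
recon = vq_decode(idx, codebook)
```

The asymmetry is what makes VQ attractive for hardware playback: only the encoder searches the codebook, while the decoder is a memory lookup per block.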

Journal ArticleDOI
TL;DR: The architecture of AUTEUR, an experimental system that embodies mechanisms to interpret, manipulate and generate video, and the role of themes and semantic fields in the generation of content oriented video scenes are described.
Abstract: This paper considers the automated generation of humorous video sequences from arbitrary video material. We present a simplified model of the editing process. We then outline our approach to narrativity and visual humour, discuss the problems of context and shot-order in video, and consider influences on the editing process. We describe the role of themes and semantic fields in the generation of content-oriented video scenes. We then present the architecture of AUTEUR, an experimental system that embodies mechanisms to interpret, manipulate and generate video. An example of a humorous video sequence generated by AUTEUR is described.