
Showing papers on "Smacker video" published in 1992


Patent
20 Jul 1992
TL;DR: In this article, an operator interface for a video editing system provides a visual sense of the content of video sequences, as well as their length, while also providing enhanced interactive control of locations and time alignments of the video.
Abstract: An operator interface for a video editing system provides a visual sense of the content of video sequences, as well as their length, while also providing enhanced interactive control of locations and time alignments of the video. As the video sequence is processed into the system, a small but representative sample of each frame is saved in a local memory, while the video itself is stored in mass storage. These samples are used to provide a video pictorial timeline of the underlying stored video. The location of an operator's view into the video sequence is controlled by a cursor's movement along a detailed video pictorial timeline, a reverse motion area and a forward motion area to provide VTR control for location changes on the video tape. The cursor's movement can be controlled by a mouse or a knob. Icons, either static or dynamic, are produced within the motion areas to indicate the amount of selected velocity. Timelines can be marked with time marks, roughly aligned and then automatically fine aligned by the system according to their respective time markers. The editing results associated with these timelines are also time aligned as a result of this process.
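The frame-sampling idea behind the pictorial timeline can be sketched in a few lines of Python. The decimation factor and the frame-as-nested-list layout are illustrative assumptions, not the patent's actual method:

```python
def build_timeline(frames, thumb_step):
    """Keep a small representative sample of each frame (here: every
    thumb_step-th pixel of every thumb_step-th row) for a pictorial
    timeline, while the full frames would go to mass storage."""
    thumbnails = []
    for frame in frames:  # frame: 2-D list of pixel values
        thumb = [row[::thumb_step] for row in frame[::thumb_step]]
        thumbnails.append(thumb)
    return thumbnails
```

A real system would reduce each frame to a fixed thumbnail size rather than a fixed decimation step, but the principle is the same: the timeline is built from cheap per-frame samples held in local memory.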

356 citations


Proceedings ArticleDOI
01 Nov 1992
TL;DR: A video indexing method that uses motion vectors to 'identify' video sequences and corresponding icons is presented, based on the identification of discrete cut points and camera operations made possible by analyzing motion vectors.
Abstract: This paper presents a video indexing method that uses motion vectors to 'identify' video sequences. To visualize and interactively control video sequences, we propose a new video index and corresponding icons. The index is based on the identification of discrete cut points and camera operations made possible by analyzing motion vectors. Simulations and experiments confirm the practicality of the index and icons.© (1992) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
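As a loose illustration of identifying cut points from motion-vector statistics, the sketch below flags frames whose mean motion-vector magnitude jumps sharply between consecutive frames. The simple threshold test is an assumption for illustration, not the paper's actual criterion:

```python
def detect_cuts(mv_magnitudes, threshold):
    """Flag frame indices where the mean motion-vector magnitude
    changes abruptly relative to the previous frame, a crude stand-in
    for motion-vector-based cut-point identification."""
    cuts = []
    for i in range(1, len(mv_magnitudes)):
        if abs(mv_magnitudes[i] - mv_magnitudes[i - 1]) > threshold:
            cuts.append(i)
    return cuts
```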

188 citations


Patent
22 May 1992
TL;DR: In this article, a system for simultaneously using a video signal to provide a video picture and computer data is presented, where the computer can be controlled by the computer data as the video picture is being displayed.
Abstract: A system for simultaneously using a video signal to provide a video picture and computer data. At the transmitting end, a video signal is digitized and then modified by substituting digital signals representative of computer data for those representative of video pixels. The modified signal is reconverted to an analog signal and transmitted to a receiver. The receiver displays the video picture corresponding to the modified signal and extracts the computer data so that they may be provided to the computer. The computer can thus be controlled by the computer data as the video picture is being displayed.

135 citations


Patent
Raymond Lee Yee1
16 Nov 1992
TL;DR: In this article, a synchronization process in an application program records audio fields with video synchronization counts, and plays back the audio and video fields in synchronism by tracking the video fields against the video sync counts in the audio fields.
Abstract: A synchronization process in an application program records audio fields with video synchronization counts, and plays back the audio and video fields in synchronism by tracking the video fields against the video sync counts in the audio fields. The video sync counts correspond to the number of video fields processed when the audio field is processed. During recording of audio and video fields for the multimedia presentation, the video fields are counted. The video field count is appended to and recorded with each audio field. During playback, the system compares the count of video fields displayed against the video field count appended to the audio field being presented. If the counts are different, the system either skips video fields, or repeats video fields to bring the video fields into synchronism with the audio fields.
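The skip/repeat decision described above reduces to comparing two counters. A minimal Python sketch, with illustrative names not taken from the patent:

```python
def sync_action(displayed_fields, audio_sync_count):
    """Compare the count of video fields already displayed against the
    sync count recorded with the current audio field, and decide whether
    to skip or repeat fields to restore synchronism."""
    if displayed_fields < audio_sync_count:
        # Video lags the audio: skip ahead by the deficit.
        return ("skip", audio_sync_count - displayed_fields)
    if displayed_fields > audio_sync_count:
        # Video leads the audio: repeat fields so audio catches up.
        return ("repeat", displayed_fields - audio_sync_count)
    return ("in_sync", 0)
```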

129 citations


Patent
06 Feb 1992
TL;DR: In this article, a video game system is configured so that when a player speaks, a video object representing the player in the video game is synchronized with the player's speech in real-time.
Abstract: The video game system is configured so that when a player speaks, a video object representing the player in the video game is synchronized with the player's speech in real-time. The audio output is transmitted from the video display unit and is thus perceived as coming from the image rather than from the player. The synchronization is accomplished by matching the loudness of syllables in the player's speech with the facial expression of the video object. This video game system includes an audio input means (18) for receiving audio input (19) from a player as well as a video display (10) for displaying video images. Further, the video system includes a data processing means (38) that is programmed to generate and coordinate the activity of the video game. Each player may be provided with a headset (14) that includes a microphone (18) and earphones (16) to facilitate player interaction and interaction with the video game system. The video game system may also include a distortion means for distorting the audio output to reflect the nature of a player's video object. The video game system also provides a method for storing (52, 54, 60) video/audio information for retrieval and play back.
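The loudness-to-expression matching might look like the following sketch, which maps the peak amplitude of an audio chunk onto a normalized "mouth openness" value. The function name, the peak-based loudness measure, and the 16-bit amplitude scale are all illustrative assumptions, not the patent's method:

```python
def mouth_openness(samples, max_amplitude=32768):
    """Map the loudness of an audio chunk (signed PCM samples) to a
    facial 'mouth openness' value in [0, 1] for the player's video
    object; louder syllables open the mouth wider."""
    if not samples:
        return 0.0
    peak = max(abs(s) for s in samples)
    return min(1.0, peak / max_amplitude)
```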

111 citations


Patent
16 Jul 1992
TL;DR: In this article, an apparatus and method for storing and retrieving synchronized audio/video "filmclips" to and from a data file of a multimedia computer workstation includes a storage means for the workstation to store audio and video data as digital data packets to the data file, and retrieval means for the workstation to retrieve audio and video data from the data file.
Abstract: An apparatus and method for storing and retrieving synchronized audio/video "filmclips" to and from a data file of a multimedia computer workstation includes a storage means for a workstation to store audio and video data as digital data packets to the data file, and retrieval means for the workstation to retrieve audio and video data from the data file. The video data is presented as an image on the display of the workstation, while the audio data is sent to either amplified speakers or headphones. An audio data stream is stored to the data file such that the audio data can be retrieved from the data file and reconstructed into a continuous audio signal. The video data is stored to the data file such that each frame of video data is inserted into the stored audio data stream without affecting the continuity of the audio signal reconstructed by the workstation. Timing information is attached to each frame of video data stored to the file, and indicates a point in the continuous audio data stream which corresponds in time to the frame of video data. A synchronizer displays a frame of video data when the point in the audio data stream, corresponding to the timing information of the retrieved video frame is audibly reproduced by the workstation.
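The storage layout described above, video frames inserted into a continuous audio stream with timing information attached, can be sketched as below. The packet tags and the sample-count timing convention are illustrative assumptions:

```python
def interleave(audio_chunks, video_frames, frames_per_second, sample_rate):
    """Interleave video frames into a continuous audio stream as tagged
    packets, stamping each frame with the audio sample position it
    corresponds to, so a synchronizer can display the frame when that
    point in the audio is reproduced."""
    stream = []
    next_frame = 0
    samples_written = 0
    samples_per_frame = sample_rate // frames_per_second
    for chunk in audio_chunks:
        # Emit any frames whose timestamp falls at or before this point.
        while (next_frame < len(video_frames)
               and next_frame * samples_per_frame <= samples_written):
            stream.append(("video", next_frame * samples_per_frame,
                           video_frames[next_frame]))
            next_frame += 1
        stream.append(("audio", chunk))
        samples_written += len(chunk)
    return stream
```

Because the audio packets remain contiguous in order, the audio signal can be reconstructed without gaps, which is the key property the patent emphasizes.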

111 citations


Patent
Hidetoshi Mishima1
16 Jan 1992
TL;DR: In this paper, the number of quantization bits used in a quantizing circuit is determined on the basis of the activity index of each video block and the length of coded data.
Abstract: A video signal encoding apparatus in which the number of quantization bits used in a quantizing circuit is determined on the basis of the activity index of each video block and the length of coded data, or alternatively on the basis of the activity index of each video block and the number of events (each event consisting of the zero run length and nonzero value of quantized data). The video signal is encoded in a compressed form after shuffling the video blocks in such a manner that, when attention is given to any given video block, its four neighboring video blocks belong to units different from the unit to which the attention video block belongs. Compression encoding of the video blocks is performed in sequence starting with the center of the screen and then proceeding toward the sides of the screen.
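One simple assignment that satisfies the stated neighbor constraint is a parity-based shuffle: assign each block to a unit by its row and column parity, so every horizontal or vertical neighbor flips one parity bit and lands in a different unit. This is just one scheme meeting the constraint; the patent may use a different shuffle:

```python
def unit_of_block(x, y):
    """Assign block (x, y) to one of four units by column/row parity.
    Each of the four horizontal/vertical neighbours differs from the
    block in exactly one parity bit, hence belongs to a different unit."""
    return (x % 2) + 2 * (y % 2)
```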

96 citations


Patent
Jens Bodenkamp1, Mark D. Atkins1
19 Jun 1992
TL;DR: In this article, an enhanced single frame buffer video display system is described for combining both video and graphical images, which stores a single data format for pixel types which may be interpreted by a conventional video generator for output to conventional color graphics computer display devices.
Abstract: An enhanced single frame buffer video display system is described for combining both video and graphical images. A single frame buffer is implemented which stores a single data format for pixel types which may be interpreted by a conventional video generator for output to conventional color graphics computer display devices. The system utilizes an enhanced graphics controller which does all pixel processing for translating all incoming graphics and video data to a single format type, as well as performing blending and scaling. The system is readily scalable for handling additional format data types.

92 citations


Patent
08 Dec 1992
TL;DR: In this article, a digital video signal converting apparatus converts a first digital video signal to a second digital video signal of higher resolution by supplying blocks of the first signal as addresses to a mapping table stored in memory, the table having been generated by training on corresponding low- and high-resolution blocks of a plurality of images.
Abstract: A digital video signal converting apparatus for converting a first digital video signal having a first resolution to a second digital video signal having a second, higher resolution comprises: a block segmentation circuit for converting the first digital video signal into a block format; a memory having a mapping table stored therein, with address terminals to which the first digital video signal in block format is supplied and output terminals from which the second digital video signal in block format is output; and a block separation circuit for converting the second digital video signal in block format into a digital video signal in raster scan order. The mapping table in the memory is generated by training on a plurality of images: for each image, first and second digital video signals are generated and converted into block format, the first digital video signal in block format is used as an address signal for the mapping table, the second digital video signal in block format is written into the memory area corresponding to that address, and the mapping table data are generated from the signals stored in each memory area.
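The training step amounts to a learned lookup table: low-resolution blocks address the table, and the high-resolution blocks that co-occur with each address are accumulated (here, averaged). A simplified Python sketch, with blocks modeled as flat lists of pixel values; the averaging rule is an illustrative assumption:

```python
from collections import defaultdict

def train_mapping_table(pairs):
    """Build a mapping table from (low_res_block, high_res_block)
    training pairs: each low-res block, as a tuple, addresses the
    table, and the co-indexed high-res blocks are averaged."""
    sums = {}
    counts = defaultdict(int)
    for lo, hi in pairs:
        key = tuple(lo)
        if key not in sums:
            sums[key] = [0.0] * len(hi)
        for i, v in enumerate(hi):
            sums[key][i] += v
        counts[key] += 1
    return {k: [s / counts[k] for s in v] for k, v in sums.items()}

def convert_block(table, lo_block):
    """Look up the higher-resolution block addressed by a low-res block."""
    return table.get(tuple(lo_block))
```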

90 citations


Proceedings ArticleDOI
01 May 1992
TL;DR: The MPEG video coding standard for the transmission of variable-bit-rate video on asynchronous transfer mode (ATM)-based broadband ISDN is examined, and insight is obtained into the cell arrival process to a network for an MPEG video source.
Abstract: The MPEG video coding standard for the transmission of variable-bit-rate video on asynchronous transfer mode (ATM)-based broadband ISDN is examined. The focus is on its use for real-time transmission of broadcast-quality video. The impact of two key parameters defined in the standard, the intraframe-to-interframe picture ratio and the quantization index, on the bit rates per frame was studied. These parameters can be used to control video sources depending on the state of the network. Also, as opposed to previous work which looks only at bit rates per frame, the bits generated per macroblock, the basic MPEG coding unit, are studied. By packetizing these bits, insight was obtained into the cell arrival process to a network for an MPEG video source.
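The packetization step maps per-macroblock bit counts onto ATM cells. Assuming the standard 48-byte (384-bit) ATM cell payload, a one-line sketch of that conversion:

```python
def cells_per_macroblock(bits_per_macroblock, cell_payload_bits=384):
    """Convert per-macroblock bit counts into the number of ATM cells
    each macroblock generates (ceiling division; 48-byte payload =
    384 bits), giving the cell arrival process studied in the paper."""
    return [-(-b // cell_payload_bits) for b in bits_per_macroblock]
```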

66 citations


Patent
12 Mar 1992

Patent
24 Sep 1992
TL;DR: In this article, a computer-based system for generating a video edit decision list, which tabulates video editing events and video synchronization points corresponding to them, is presented, based on a sequence of video and audio manipulations produced by a digital video editing system.
Abstract: A computer-based system for generating a video edit decision list, which tabulates video editing events and video synchronization points corresponding to the video editing events. The invention accepts a sequence of video and audio manipulations produced by a digital video editing system, each manipulation effecting a particular video editing event, and generates, based on the manipulation sequence, a list of video editing events and corresponding synchronization points. The invention then conforms the list to a user-specified format selected from a plurality of video edit decision list format templates, provided by the system, which each specify a model for defining video editing events distinctly in that format, and then the video edit decision list is output in the user-specified format. The invention is adapted to also convert a video edit decision list from a first format to a second, user-specified format; and further is adapted to generate a sequence of video and audio manipulations to be used by a digital video editor for editing a video, based on a video edit decision list.

Patent
24 Mar 1992
TL;DR: In this paper, a video multiplexor-encoder and decoder-converter is proposed for displaying multiple video images in a selected pattern of multiple video windows on a video display device.
Abstract: A video multiplexor-encoder and decoder-converter includes a video multiplexor and encoder for selectively receiving, time-division multiplexing and encoding multiple video signals representing multiple video images for transfer and simultaneous display thereof in a selected pattern of multiple video windows on a video display device, and further includes a decoder and video converter for receiving, decoding and converting an encoded, time-division multiplexed video signal for selective, simultaneous display of the multiple video images in the selected pattern of multiple video windows on a video display device. The encoded, multiplexed video signal includes display control data which selectively represent a position, size and relative visibility priority for each one of the video images within the selected display pattern of multiple video windows.
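Stripped of the encoding and display-control layers, the core time-division multiplexing can be sketched as a round-robin interleave with source tags, so the decoder can route each sample to the right window. A minimal illustration; the patent's encoded signal additionally carries position, size, and visibility-priority data:

```python
def tdm_multiplex(streams):
    """Round-robin time-division multiplex several equal-length video
    sample streams into one signal, tagging each sample with its
    source index for later demultiplexing."""
    muxed = []
    for samples in zip(*streams):
        for src, s in enumerate(samples):
            muxed.append((src, s))
    return muxed

def tdm_demultiplex(muxed, n):
    """Split a multiplexed signal back into n per-window streams."""
    out = [[] for _ in range(n)]
    for src, s in muxed:
        out[src].append(s)
    return out
```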

Patent
21 Oct 1992
TL;DR: In this paper, an input interlaced video signal is converted in any of a variety of ways to a progressive scan format video signal, and then the interlaced fields of the progressive-scan frames are displayed on a video monitor or view finder.
Abstract: In order to simulate, on set or on location, a subsequent video to film conversion process, an input interlaced video signal is converted in any of a variety of ways to a progressive scan format video signal, and then interlaced fields of the progressive scan format frames are displayed on a video monitor or view finder.

Patent
30 Jun 1992
TL;DR: In this paper, a method for merging first and second digital video signals generated by first (32) and second (40) video controllers, respectively, for merged transmission to a digital video decoder is presented.
Abstract: A method for merging first and second digital video signals generated by first (32) and second (40) video controllers, respectively, for merged transmission to a digital video decoder. The first video controller (32) transmits the first digital video signal to the decoder while monitoring the signal for a luminance component which designates the boundary (75) between a first image (74) constructable from the first video signal and a second image (76) constructable from the second video signal. When the luminance component is detected, a colorkey signal is generated by the first video controller (32) and transmitted to the second video controller (40) to initiate transmission of the second digital video signal to the decoder in place of the first digital video signal. The first video controller continues to monitor the first video signal until the absence of the luminance component is detected.
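At pixel level, colorkey switching amounts to: output the first signal until its luminance matches the key value, then substitute the second signal until the key disappears. The sketch below works on buffered scanlines for clarity; real hardware switches the live signal path instead, and the key test here is an illustrative simplification:

```python
def merge_scanline(first, second, key_luma):
    """Merge two video scanlines: wherever the first signal carries the
    colorkey luminance value, substitute the co-located pixel from the
    second signal; elsewhere pass the first signal through."""
    out = []
    for px1, px2 in zip(first, second):
        out.append(px2 if px1 == key_luma else px1)
    return out
```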

Patent
11 Feb 1992
TL;DR: In this article, a video file server 20 includes both a random access data storage subsystem 78 and an archive data storage subsystem 82 for storing compressed video data, and compression-decompression cards 42 included in the video file server 20 provide an authoring capability to store compressed video and audio data in the storage subsystems.
Abstract: The technical field of the invention generally concerns systems for interactive access to stored video data. In particular, a video file server 20 includes both a random access data storage subsystem 78 and an archive data storage subsystem 82 for storing compressed video data. In response to commands from subscriber systems 66, the video file server 20 transmits compressed video data to the subscriber systems 66 over lines 64A-64H, or receives compressed video data therefrom. Commands from the subscriber systems 66 may cause the video file server 20 to store compressed video data received from the subscriber systems 66 in the random access data storage subsystem 78 and/or archive data storage subsystem 82. Compression-decompression cards 42 included in the video file server 20 provide an authoring capability for storing compressed video and/or audio data in the random access data storage subsystem 78 and/or archive data storage subsystem 82, and for converting from one data compression standard to another.

Patent
30 Apr 1992
TL;DR: In this paper, the authors used a stable (crystal oscillator) time base clock to reconstruct the frequency of the video signal and then used a contrast optimization process to determine the pixel clock rate.
Abstract: Apparatus and method are provided which receive and sample an incoming video image signal asynchronously, and then process the signal to recover the video image, including its video format, for conversion into a preselected video format. The apparatus and method first sample the video signal using a stable (crystal oscillator) time base clock to reconstruct the frequency of the video signal, i.e., to recover the video format, and then use a contrast optimization process to determine the video signal's pixel clock rate.

Patent
05 Nov 1992
TL;DR: In this paper, a high definition television transmitting system includes a source of high definition analog RGB video signals which are converted to corresponding digitally encoded signals and thereafter compressed by a video encoder to a six megahertz bandwidth.
Abstract: A high definition television transmitting system includes a source of high definition analog RGB video signals which are converted to corresponding digitally encoded signals and thereafter compressed by a video encoder to a six megahertz bandwidth. The compressed data is formatted by a transmitter and processed in accordance with a Reed-Solomon error control system for broadcast as an NTSC-type broadcast signal. A high definition television receiver includes a high definition signal receiver coupled to the transmitter by a transmission link. The receiver extracts the compressed video, audio and ancillary data signals and applies a Reed-Solomon error correction thereto. The compressed video data is reconstructed by a video decoder and processed for high definition display. A high definition digital video tape recorder is coupled to the transmitter by an interface and format converter facilitating both recording and playback functions of the digital video tape recorder while maintaining the Reed-Solomon error control. The format converter provides compatibility between the high definition transmitter and the digital video tape recorder.

Patent
Masami Harigai1, Hiroyasu Shindou1
24 Mar 1992
TL;DR: In this article, an encoder disconnects video data between a video source and a video-using device only during 21H in the vertical blanking interval, during which the encoder applies locally generated coded data signals, in a format detectable by conventional data decoding devices, to the video-using device.
Abstract: An encoder disconnects video data between a video source and a video-using device only during 21H in the vertical blanking interval. During 21H, it applies locally generated coded data signals, in a format detectable by conventional data decoding devices, to the video-using device. The video source may be a camera or tape playback, and the using device may be a TV set or a video tape recorder.

Patent
01 Dec 1992
TL;DR: In this paper, every two frames of a second, wider-bandwidth video signal are combined into a pixel value sum and a pixel value difference, and either the resultant interframe output or the first (standard) video signal is compressed through two-dimensional orthogonal transformation for ease of storage.
Abstract: When a standard TV signal of digital format is a first video signal and another digitized TV signal having a bandwidth wider than that of the first video signal is a second video signal, every two frames of the second video signal are combined to produce a pixel value sum and a pixel value difference of the two frames; either the resultant interframe calculated output or the first video signal is then compressed through two-dimensional orthogonal transformation for ease of storage.
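The interframe calculation preceding the orthogonal transform is a per-pixel sum and difference of a frame pair. A minimal sketch with frames as flat pixel lists; scaling and rounding details are format-specific and omitted:

```python
def sum_and_difference(frame_a, frame_b):
    """Per-pixel sum and difference of two consecutive frames, the
    interframe calculation performed before the two-dimensional
    orthogonal transform."""
    total = [a + b for a, b in zip(frame_a, frame_b)]
    diff = [a - b for a, b in zip(frame_a, frame_b)]
    return total, diff
```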

Book ChapterDOI
07 Oct 1992
TL;DR: This paper introduces audio/ video, or “AV”, databases and discusses the key problem of data modelling in the context of time-based media.
Abstract: Advances in data compression are creating new possibilities for applications combining digital audio and digital video. These applications, such as desktop authoring environments and educational or training programs, often require access to collections of audio/video material. This paper introduces audio/video, or “AV”, databases and discusses the key problem of data modelling in the context of time-based media. Extensions needed for modelling basic audio/video structures and relationships are described. These extensions, which include temporal sequences, quality factors, derivation relationships and temporal composition, are applied to an existing audio/video data representation.

Patent
15 Sep 1992
TL;DR: In this article, a system and method of digital video editing simultaneously displays a plurality of source video windows on a screen, each window showing a digital source video stream, and the user can select among the video windows at any time while they are being shown.
Abstract: A system and method of digital video editing simultaneously displays a plurality of source video windows on a screen, each window showing a digital source video stream. The user may select among the source video windows at any time while they are being shown. The selected digital source video stream appears in a record video window, and continues to run in real-time in both the selected source video window and the record video window. All windows continuously display real-time video streams. The user may continue to make selections among the source video windows as often as desired, and may also select transformations, or special effects, from an on-screen list. The video stream playing in the record window thus forms a digital user-arranged version of selected portions of the source video streams, plus transformations, as selected by the user. The user's selections are stored to permit subsequent playback of the digital user-arranged video stream in a playback window. A single digital audio stream, or selectable multiple digital audio streams, may accompany the source and user-arranged video streams.

Proceedings ArticleDOI
01 Nov 1992
TL;DR: The framework proposed for the ongoing second phase of Motion Picture Experts Group (MPEG-2) standard is employed to study the performance of one frequency domain scheme and investigate improvements aimed at increasing its efficiency.
Abstract: Scalable video coding is important in a number of applications where video needs to be decoded and displayed at a variety of resolution scales. It is more efficient than simulcasting, in which all desired resolution scales are coded totally independently of one another within the constraint of a fixed available bandwidth. In this paper, we focus on scalability using the frequency domain approach. We employ the framework proposed for the ongoing second phase of the Motion Picture Experts Group (MPEG-2) standard to study the performance of one such scheme and investigate improvements aimed at increasing its efficiency. Practical issues related to multiplexing of encoded data of various resolution scales to facilitate decoding are considered. Simulations are performed to investigate the potential of a chosen frequency domain scheme. Various prospects and limitations are also discussed. 1. INTRODUCTION Much of the recent work on video compression has focused on improving performance of video coding schemes consisting of a single layer [1,2]. There are, however, a range of applications where video needs to be decoded and displayed at a variety of resolution scales. Among the noteworthy applications of interest [3,5] are multi-point video conferencing, windowed display on workstations, video communications on asynchronous transfer mode (ATM) networks and HDTV with embedded standard TV. In light of the abundance of applications that may benefit from multi-resolution video, the second phase of the Motion Picture Experts Group

Patent
13 Mar 1992
TL;DR: In this article, a process for generating a series of video images (3a, 3b, 3c, 3d) on a display (3) is described, where video signals corresponding to each video image are combined to supply a resulting video signal which is then applied to display means (2), especially a video projector, associated with the display.
Abstract: Process for generating a series of video images (3a, 3b, 3c, 3d) on a display (3). Video signals corresponding to each video image (3a, 3b, 3c, 3d) are combined to supply a resulting video signal which is then applied to display means (2), especially a video projector, associated with the display (3), said images (3a, 3b, 3c, 3d) occupying within the latter (3) complementary spaces, the contours of which may be modified. For use especially in communication, advertising, and public and home audiovisual systems.

Proceedings ArticleDOI
01 Nov 1992
TL;DR: A method for extracting a hierarchical structure of video sequences in the time domain as a means for video browsing and a hierarchical segmentation method based on the descriptive method are described.
Abstract: This report describes a method for extracting a hierarchical structure of video sequences in the time domain as a means for video browsing. First, a descriptive method for video content is proposed. Next, a hierarchical segmentation method based on the descriptive method is discussed and results of an experiment are also introduced. The experiment has proved the effectiveness of the proposed methods.© (1992) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.

Patent
11 Dec 1992
TL;DR: In this article, a digital video signal converting apparatus converts a first digital video signal to a second digital video signal of higher resolution by supplying blocks of the first signal as addresses to a mapping table stored in memory, the table having been generated by training on corresponding low- and high-resolution blocks of a plurality of images.
Abstract: A digital video signal converting apparatus for converting a first digital video signal having a first resolution to a second digital video signal having a second, higher resolution comprises: a block segmentation circuit for converting the first digital video signal into a block format; a memory having a mapping table stored therein, with address terminals to which the first digital video signal in block format is supplied and output terminals from which the second digital video signal in block format is output; and a block separation circuit for converting the second digital video signal in block format into a digital video signal in raster scan order. The mapping table in the memory is generated by training on a plurality of images: for each image, first and second digital video signals are generated and converted into block format, the first digital video signal in block format is used as an address signal for the mapping table, the second digital video signal in block format is written into the memory area corresponding to that address, and the mapping table data are generated from the signals stored in each memory area.


Proceedings ArticleDOI
03 May 1992
TL;DR: Experience with the Digital Video Interactive (DVI) architecture is presented and classes of algorithms and implementations suiting the demands of a medical image network are outlined.
Abstract: Video compression algorithms and commercial VLSI are surveyed. The emphasis is on hybrid and standard compression schemes. Classes of algorithms and implementations suiting the demands of a medical image network are outlined. Experience with the Digital Video Interactive (DVI) architecture is presented.

Journal ArticleDOI
E. Petajan1
TL;DR: A noncritical overview of the video coding techniques used in the four proposed digital HDTV systems is presented, providing basic descriptions of the discrete cosine transform, motion estimation and compensation, adaptive quantization, variable-length coding, and compressed video data formatting.
Abstract: Motion-compensated transform coding is proposed for video compression in systems for the US terrestrial broadcast of high-definition television (HDTV). A noncritical overview of the video coding techniques used in the four proposed digital HDTV systems is presented, providing basic descriptions of the discrete cosine transform, motion estimation and compensation, adaptive quantization, variable-length coding, and compressed video data formatting.

Journal ArticleDOI
01 Aug 1992
TL;DR: The new two-layered video compression system provides the capability of encoding a higher resolution video sequence with a small modification outside the existing compression hardware.
Abstract: The authors describe a new multipurpose two-layered video compression system. The new two-layered video compression system provides the capability of encoding a higher resolution video sequence with a small modification outside the existing compression hardware. The techniques used were subsampling and expansion combined with differential pulse code modulation (DPCM) coding of the detail information lost in the subsampling process. Quadtree coding was used to encode the location of the regions to which the details were added. Simulation results of the two-layered system are included.
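The subsampling/expansion-plus-difference structure of such a two-layer system can be sketched on a 1-D signal: the base layer is a 2:1 subsample, and the detail layer is the difference between the original and the expanded base. Nearest-neighbour expansion is an illustrative choice here, and the DPCM and quadtree coding of the detail layer are omitted:

```python
def two_layer_encode(signal):
    """Split a 1-D signal into a base layer (2:1 subsample) and a
    detail layer (difference against the expanded base)."""
    base = signal[::2]
    expanded = []
    for v in base:  # nearest-neighbour expansion back to full rate
        expanded.extend([v, v])
    expanded = expanded[:len(signal)]
    detail = [s - e for s, e in zip(signal, expanded)]
    return base, detail

def two_layer_decode(base, detail):
    """Reconstruct the full-resolution signal from both layers."""
    expanded = []
    for v in base:
        expanded.extend([v, v])
    expanded = expanded[:len(detail)]
    return [e + d for e, d in zip(expanded, detail)]
```

A lower-resolution decoder uses only the base layer, while a full decoder adds the detail layer back, which is the property that lets the higher-resolution sequence ride on the existing compression hardware.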