
Showing papers on "Video compression picture types published in 1992"


Patent
20 Jul 1992
TL;DR: In this article, an operator interface for a video editing system provides a visual sense of the content of video sequences, as well as their length, while also providing enhanced interactive control of locations and time alignments of the video.
Abstract: An operator interface for a video editing system provides a visual sense of the content of video sequences, as well as their length, while also providing enhanced interactive control of locations and time alignments of the video. As the video sequence is processed into the system, a small but representative sample of each frame is saved in a local memory, while the video itself is stored in mass storage. These samples are used to provide a video pictorial timeline of the underlying stored video. The location of an operator's view into the video sequence is controlled by a cursor's movement along a detailed video pictorial timeline, a reverse motion area and a forward motion area to provide VTR control for location changes on the video tape. The cursor's movement can be controlled by a mouse or a knob. Icons, either static or dynamic, are produced within the motion areas to indicate the amount of selected velocity. Timelines can be marked with time marks, roughly aligned and then automatically fine aligned by the system according to their respective time markers. The editing results associated with these timelines are also time aligned as a result of this process.

356 citations


Patent
Cesar A. Gonzales1, Eric Viscito1
23 Oct 1992
TL;DR: In this paper, a system and method for implementing an encoder suitable for use with the proposed ISO/IEC MPEG standards including three cooperating components or subsystems that operate to variously adaptively pre-process the incoming digital motion video sequences, allocate bits to the pictures in a sequence, and adaptively quantize transform coefficients in different regions of a picture in a video sequence so as to provide optimal visual quality given the number of bits allocated to that picture.
Abstract: A system and method are disclosed for implementing an encoder suitable for use with the proposed ISO/IEC MPEG standards including three cooperating components or subsystems that operate to variously adaptively pre-process the incoming digital motion video sequences, allocate bits to the pictures in a sequence, and adaptively quantize transform coefficients in different regions of a picture in a video sequence so as to provide optimal visual quality given the number of bits allocated to that picture.

345 citations


Proceedings ArticleDOI
01 Nov 1992
TL;DR: A video indexing method that uses motion vectors to 'identify' video sequences is presented; a new video index and corresponding icons are based on the identification of discrete cut points and camera operations made possible by analyzing motion vectors.
Abstract: This paper presents a video indexing method that uses motion vectors to 'identify' video sequences. To visualize and interactively control video sequences, we propose a new video index and corresponding icons. The index is based on the identification of discrete cut points and camera operations made possible by analyzing motion vectors. Simulations and experiments confirm the practicality of the index and icons.

188 citations


Journal ArticleDOI
TL;DR: The MPEG video compression algorithm combines block-based motion compensation with DCT-based spatial compression of the prediction error signal; at about 1.5 Mbit/s the quality of the compressed video has been compared to that of consumer-grade VCRs.
Abstract: The video compression technique developed by MPEG covers many applications from interactive systems on CD-ROM to delivery of video information over telecommunications networks. The MPEG video compression algorithm relies on two basic techniques: block based motion compensation for the reduction of the temporal redundancy and transform domain based compression for the reduction of spatial redundancy. Motion compensation techniques are applied with both predictive and interpolative techniques. The prediction error signal is further compressed with spatial redundancy reduction (DCT). The quality of the compressed video with the MPEG algorithm at about 1.5 Mbit/s has been compared to that of consumer grade VCR's.
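The "predictive" and "interpolative" motion-compensation modes correspond to MPEG's P and B picture types. As an illustrative sketch only (the GOP length and P-picture spacing below are typical parameter choices, not values taken from this paper), the picture-type sequence of one group of pictures can be generated as:

```python
def gop_pattern(n=12, m=3):
    """Picture types for one MPEG group of pictures: an I picture first,
    a P (predicted) picture at every m-th slot, and B (interpolated)
    pictures in between. n is the GOP length, m the P-picture spacing."""
    types = []
    for i in range(n):
        if i == 0:
            types.append("I")      # intra-coded anchor picture
        elif i % m == 0:
            types.append("P")      # predicted from the previous anchor
        else:
            types.append("B")      # interpolated between two anchors
    return "".join(types)

# A common 12-picture GOP with m=3:
print(gop_pattern())  # IBBPBBPBBPBB
```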

155 citations


Patent
Raymond Lee Yee1
16 Nov 1992
TL;DR: In this article, a synchronization process in an application program records audio fields with video synchronization counts, and plays back the audio and video fields in synchronism by tracking the video fields against the video sync counts in the audio fields.
Abstract: A synchronization process in an application program records audio fields with video synchronization counts, and plays back the audio and video fields in synchronism by tracking the video fields against the video sync counts in the audio fields. The video sync counts correspond to the number of video fields processed when the audio field is processed. During recording of audio and video fields for the multimedia presentation, the video fields are counted. The video field count is appended to and recorded with each audio field. During playback, the system compares the count of video fields displayed against the video field count appended to the audio field being presented. If the counts are different, the system either skips video fields or repeats video fields to bring the video fields into synchronism with the audio fields.
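The skip/repeat decision described above can be sketched as follows; the function name and return shape are illustrative, not from the patent:

```python
def sync_action(fields_displayed, count_in_audio_field):
    """Decide how to realign video with audio.

    count_in_audio_field is the video-field count that was appended to
    the audio field now being presented; fields_displayed is how many
    video fields have actually been shown so far.
    """
    if fields_displayed < count_in_audio_field:
        # Video lags audio: drop fields to catch up.
        return ("skip", count_in_audio_field - fields_displayed)
    elif fields_displayed > count_in_audio_field:
        # Video leads audio: repeat fields so audio catches up.
        return ("repeat", fields_displayed - count_in_audio_field)
    return ("in_sync", 0)
```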

129 citations


Patent
16 Jul 1992
TL;DR: In this article, an apparatus and method for storing and retrieving synchronized audio/video "filmclips" to and from a data file of a multimedia computer workstation includes a storage means for a workstation to store audio and video data as digital data packets to the data file, and retrieval means for the workstation to retrieve audio and video data from the data file.
Abstract: An apparatus and method for storing and retrieving synchronized audio/video "filmclips" to and from a data file of a multimedia computer workstation includes a storage means for a workstation to store audio and video data as digital data packets to the data file, and retrieval means for the workstation to retrieve audio and video data from the data file. The video data is presented as an image on the display of the workstation, while the audio data is sent to either amplified speakers or headphones. An audio data stream is stored to the data file such that the audio data can be retrieved from the data file and reconstructed into a continuous audio signal. The video data is stored to the data file such that each frame of video data is inserted into the stored audio data stream without affecting the continuity of the audio signal reconstructed by the workstation. Timing information is attached to each frame of video data stored to the file, and indicates a point in the continuous audio data stream which corresponds in time to the frame of video data. A synchronizer displays a frame of video data when the point in the audio data stream, corresponding to the timing information of the retrieved video frame is audibly reproduced by the workstation.

111 citations


Patent
20 Oct 1992
TL;DR: In this paper, a system and method of compressing original video data expressed in a plurality of digitally coded frames which enable decompression and playback of resulting compressed video data at one of the plurality of frame rates while maintaining temporal fidelity of the frames displayed is presented.
Abstract: A system and method of compressing original video data expressed in a plurality of digitally coded frames which enable decompression and playback of resulting compressed video data at one of a plurality of frame rates while maintaining temporal fidelity of the frames displayed. Compression includes selecting a plurality of rate streams for the compressed video data, including a highest rate stream including all of the frames of the original video data and a lowest rate stream including a subset of regularly spaced frames of the original video data. Then the initial frame in the original video data is spatially compressed and the resulting compressed data placed in the compressed video data. The initial frame is also saved as a base frame for all rate streams for subsequent temporal compression of the original video data. As frames are retrieved from the original video data in sequence, temporal compression based on frame differencing techniques between the retrieved frame and the base is carried out, with difference frames being stored to the compressed video data. Each difference frame is placed in the resulting compressed video data for later decompression and reproduction.
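Because every difference frame is computed against the same base frame, any regularly spaced subset of difference frames decodes on its own, which is what makes multiple rate streams possible. A minimal sketch under simplifying assumptions (frames as flat pixel lists, plain subtraction standing in for the patent's frame-differencing techniques, and no spatial compression of the base):

```python
def compress(frames):
    """Keep the first frame as the base; store each later frame as its
    per-pixel difference from that base."""
    base = frames[0]
    diffs = [[p - b for p, b in zip(f, base)] for f in frames[1:]]
    return base, diffs

def decompress(base, diffs, stride=1):
    """Reconstruct at full rate (stride=1) or a lower frame rate by
    taking every stride-th difference frame; each difference is
    relative to the base, so skipping frames never breaks decoding."""
    out = [base]
    for d in diffs[stride - 1::stride]:
        out.append([b + x for b, x in zip(base, d)])
    return out
```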

96 citations


Patent
29 Jan 1992
TL;DR: In this paper, a video image frame area is divided into a set of subframes, and each subframe is systematically shifted such that the individual subframes progressively cycle across and wrap around the frame area.
Abstract: Digital video signals are processed by a plurality of independently operating processors to provide data for transmission in a compressed, motion compensated form. A video image frame area is divided into a set of subframes. The set of subframes is systematically shifted such that the individual subframes progressively cycle across and wrap around the video image frame area. For each successive video frame, video image data bounded by each of the different subframes is independently compressed using motion estimation to reduce data redundancy among the successive frames. The motion estimation is limited for each subframe of a current video frame to areas of a previous video frame that were bounded by the same subframe in the previous frame. In an illustrated embodiment, the set of subframes is shifted once for each successive video frame, and each subframe includes a refresh region whereby the video image frame area is progressively refreshed as the subframes are shifted thereacross. Receiver apparatus for use in decoding the independently processed subframe data is also disclosed.

94 citations


Proceedings ArticleDOI
07 Jan 1992
TL;DR: Experience shows that the editor provides a simple and easy to use, but powerful system for multimedia document preparation, and it can act as a basis for supporting applications such as multimedia mail, electronic distribution of television news and video entertainment, etc.
Abstract: The authors present a window-based editor for manipulating digital video and audio. The editor supports real-time recording, playback, and editing (cut, copy, and paste) of several multimedia objects. Using the X Window system, the authors have implemented the editor on an environment of Sun SPARCstations and PC-ATs equipped with video compression hardware. The user interface of the multimedia editor consists of a main editing window for each display device, and rope windows, which represent synchronized sequences of digital video and audio being accessed, called ropes. Experience shows that the editor provides a simple and easy-to-use, but powerful, system for multimedia document preparation, and it can act as a basis for supporting applications such as multimedia mail, electronic distribution of television news, and video entertainment.

70 citations


Proceedings ArticleDOI
01 May 1992
TL;DR: The MPEG video coding standard for the transmission of variable-bit-rate video on asynchronous transfer mode (ATM)-based broadband ISDN is examined and insight was obtained into the cell arrival process to a network for an MPEG video source.
Abstract: The MPEG video coding standard for the transmission of variable-bit-rate video on asynchronous transfer mode (ATM)-based broadband ISDN is examined. The focus is on its use for real-time transmission of broadcast-quality video. The impact of two key parameters, the intraframe to interframe picture ratio and the quantization index that are defined in the standard, on the bit rates per frame was studied. These parameters can be used to control video sources depending on the state of the network. Also, as opposed to previous work which looks only at bit rates per frame, the bits generated per macroblock are studied. This is the basic MPEG coding unit. By packetizing these bits, insight was obtained into the cell arrival process to a network for an MPEG video source.

66 citations


Patent
Stuart J. Golin1
01 Jun 1992
TL;DR: In this article, an initial analysis of the image data before compression is performed to determine the setting of a compression controller and other compression system thresholds and quantizers, and qualitative information regarding events such as scene changes, brief periods of rapid motion, dissolves, wipes and the appearance of a single anomalous image.
Abstract: In a method of encoding a sequence of images of a digital motion video signal, information regarding future images in the image sequence is obtained by making an initial analysis of the image data before compression. The initial analysis provides information to the compression system regarding variations in complexity between images. This information is used to determine the setting of a compression controller. From this setting, other compression system thresholds and quantizers are scaled. In addition, the initial analysis provides qualitative information regarding events such as scene changes, brief periods of rapid motion, dissolves, wipes, and the appearance of a single anomalous image.

Patent
08 Oct 1992
TL;DR: In this paper, frame-to-frame differences are coded as a compressed M×N exclusive-OR plane of pixel change values, plus location displacement control values for an output pointer into a decompressed video frame; because exclusive-OR coding is symmetric, the replay process is bidirectional.
Abstract: A process for coding a plurality of compressed video data streams in a time ordered sequence. Each compressed data stream includes coding of frame-to-frame differences of a video segment, which are represented as a compressed M×N exclusive-OR plane of pixel change values and location displacement control values for an output pointer into a decompressed video frame. By coding frame-to-frame differences as exclusive-OR values, the replay process is made bidirectional, allowing for both forward and reverse playback of the video segment.
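Exclusive-OR differencing is its own inverse, which is what makes replay bidirectional: applying the same stored plane to either endpoint frame yields the other. A minimal sketch with frames as flat lists of pixel values:

```python
def xor_plane(frame_a, frame_b):
    """Per-pixel exclusive-OR of two equally sized frames."""
    return [a ^ b for a, b in zip(frame_a, frame_b)]

def apply_plane(frame, plane):
    """XOR is self-inverse, so one plane steps forward or backward."""
    return [p ^ d for p, d in zip(frame, plane)]

frame1 = [10, 20, 30, 40]
frame2 = [10, 25, 30, 41]
plane = xor_plane(frame1, frame2)
assert apply_plane(frame1, plane) == frame2   # forward playback
assert apply_plane(frame2, plane) == frame1   # reverse playback
```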

Proceedings ArticleDOI
01 Nov 1992
TL;DR: The results of recent work on a specific adaptive algorithm that provides excellent robustness properties for MPEG-1 video transmitted on either one- or two-tier transmission media are reported.
Abstract: This paper presents an adaptive error concealment technique for MPEG (Moving Picture Experts Group) compressed video. Error concealment algorithms are essential for many practical video transmission scenarios characterized by occasional data loss due to thermal noise, channel impairments, network congestion, etc. Such scenarios of current importance include terrestrial (simulcast) HDTV, teleconferencing via packet networks, TV/HDTV over fiber-optic ATM (asynchronous transfer mode) systems, etc. In view of the increasing importance of MPEG video for many of these applications, a number of error concealment approaches for MPEG have been developed, and are currently being evaluated in terms of their complexity vs. performance trade-offs. Here, we report the results of recent work on a specific adaptive algorithm that provides excellent robustness properties for MPEG-1 video transmitted on either one- or two-tier transmission media. Receiver error concealment is intended to ameliorate the impact of lost video data by exploiting available redundancy in the decoded picture. The concealment process must be supported by an appropriate transport format which helps to identify the image pixel regions which correspond to lost video data. Once the image regions (i.e., macroblocks, slices, etc.) to be concealed are identified, a combination of temporal and spatial replacement techniques may be applied to fill in the lost picture elements. The specific details of the concealment procedure will depend upon the compression algorithm being used, and on the level of algorithmic complexity permissible within the decoder. Simulation results obtained from a detailed end-to-end model that incorporates MPEG compression/decompression and a custom cell-relay (ATM type) transport format are reported briefly.

Patent
24 Mar 1992
TL;DR: In this paper, a video multiplexor-encoder and decoder-converter is proposed for displaying multiple video images in a selected pattern of multiple video windows on a video display device.
Abstract: A video multiplexor-encoder and decoder-converter includes a video multiplexor and encoder for selectively receiving, time-division multiplexing and encoding multiple video signals representing multiple video images for transfer and simultaneous display thereof in a selected pattern of multiple video windows on a video display device, and further includes a decoder and video converter for receiving, decoding and converting an encoded, time-division multiplexed video signal for selective, simultaneous display of the multiple video images in the selected pattern of multiple video windows on a video display device. The encoded, multiplexed video signal includes display control data which selectively represent a position, size and relative visibility priority for each one of the video images within the selected display pattern of multiple video windows.

Patent
23 Oct 1992
TL;DR: In this article, the human eye can more readily discern local image features or artifacts at central image locations or focused-upon areas, while tolerating, to a greater extent, artifacts dispersed elsewhere in the image.
Abstract: Systems and methods that enable digital video compression techniques to manage and control artifact presence in each compressed frame of the video clip. Wherein specific embodiments are applicable to interframe and intraframe video compression methods and can be used in the compression of digital images and digital video clips. Other embodiments are employable in digital video compression and are applicable to interframe compression methods. A mechanism to increase the amount of video compression, while maintaining video quality that may otherwise be sacrificed with such increases in video compression, by threshold value management to accommodate the human eye's ability to more readily discern local image features or artifacts at central image locations or focused-upon areas, while tolerating, to a greater extent, artifacts dispersed elsewhere in the image.

Patent
23 Oct 1992
TL;DR: Hybrid compression processes for digital color video data that enable software only playback of the compressed digital video in low-end computers, wherein intraframe and interframe compression techniques are brought together through a sequence of procedures that analyze local frame regions, integrate unique processes with block truncation coding compression, and adopt the advantages of visual pattern image coding for color video as mentioned in this paper.
Abstract: Hybrid compression processes for digital color video data that enable software-only playback of the compressed digital video in low-end computers, wherein intraframe and interframe compression techniques are brought together through a sequence of procedures that analyze local frame regions, integrate unique processes with block truncation coding compression, and adopt the advantages of visual pattern image coding for color video. The process determines the appropriate encoding of each local frame region with one of various compression techniques, based upon its image properties. The compression methods retain the fidelity of the original video data to provide high quality video during decompression and reconstruction of high motion and textured video clips, while simultaneously providing sufficient compression and ease of decoding for software-only decompression, thereby exhibiting properties that enable good quality video to be displayed in low-end computers.

Patent
21 Oct 1992
TL;DR: In this paper, a personal computer system is operated to concurrently execute threads of multitasking operations to capture motion video data from a video source, compress such data, and record the compressed data in a file.
Abstract: A personal computer system is operated to concurrently execute threads of multitasking operations to capture motion video data from a video source, compress such data, and record the compressed data in a file. Compression is selectively done in either one of two modes, an inter-frame compression mode and a intra-frame compression mode, both modes being block-oriented. During intra-frame compression, homogenous blocks are used to represent four pixel values with a single pixel value when the four pixels in a block are perceptually similar. During inter-frame compression, unchanged blocks are used to represent four pixel values as unchanged from the preceding frame when the four pixels are perceptually similar to the same four pixels in the preceding frame. Additionally, inter-frame compressed video frames use homogenous blocks to represent four pixel values with a single pixel when the four pixel values in a block are perceptually similar to each other but are perceptually different from the same four pixels in the previous frame.
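The block decisions described above can be sketched as follows; the max-min spread test and the threshold value are illustrative stand-ins for the patent's unspecified perceptual-similarity criterion:

```python
def perceptually_similar(pixels, threshold=4):
    """Treat a block as uniform if its pixel spread is within a
    threshold (a max-min spread is assumed here for illustration)."""
    return max(pixels) - min(pixels) <= threshold

def classify_block(block, prev_block=None, threshold=4):
    """Classify a four-pixel block for intra- or inter-frame coding."""
    if prev_block is not None:
        # Inter-frame mode: an "unchanged" block copies the previous frame.
        diffs = [abs(a - b) for a, b in zip(block, prev_block)]
        if max(diffs) <= threshold:
            return "unchanged"
    if perceptually_similar(block, threshold):
        return "homogeneous"   # represent four pixels with one value
    return "literal"           # store all four pixel values
```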

Journal ArticleDOI
TL;DR: This paper proposes a new methodology (called TES) for modeling the frame bitrate stream generated by compressed video sources, and shows that this model can be used to address a number of design issues that arise in this class of problem.

Patent
12 May 1992
TL;DR: In this paper, a 3-2 pulldown frame convertor is used to convert signals representing input film frame images, having a lower associated film frame image rate, to signals representing output video frame images having a higher associated video frame image rate.
Abstract: A film-to-video frame image convertor includes a 3-2 pulldown frame convertor for converting signals representing input film frame images, having a lower associated film frame image rate, to signals representing output video frame images having a higher associated video frame image rate. The output video frame images consist of genuine and simulated video frame images, which correspond to actual input film frame images and multiple input film frame images, respectively, in accordance with a 3-2 film-to-video frame pulldown. Each genuine video frame image consists of two video field images corresponding to two actual film field images from the same film frame image. Each simulated video frame image consists of two video field images corresponding to two actual film field images from different film frame images, with one of the two video field images being a duplicate of a video field image in an adjacent video frame image. Identification signals are selectively inserted into the vertical blanking interval of some of the output video frame images to identify which ones are simulated video frame images containing duplicate video field images. This allows the duplicate video field images to be identified and selectively deleted when the video frame images, having the higher associated video frame image rate, are to be reconverted to film frame images having the lower associated film frame image rate.
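The 3-2 cadence maps every four film frames onto ten video fields (five video frames), so two of every five video frames are the "simulated" frames that mix fields from different film frames. A minimal sketch of the field cadence (function name illustrative):

```python
def pulldown_32(film_frames):
    """Map film frames (24 fps) to video fields (60 fields/s) using the
    3-2 cadence: successive film frames contribute 3, 2, 3, 2, ... fields."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 3 if i % 2 == 0 else 2
        fields.extend([frame] * repeats)
    return fields

# Four film frames become ten fields, i.e. five two-field video frames:
# pairs (A,A) (A,B) (B,C) (C,C) (D,D) -- the 2nd and 3rd video frames
# mix fields from different film frames.
print(pulldown_32(["A", "B", "C", "D"]))
```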

Proceedings ArticleDOI
02 Jan 1992
TL;DR: It is pointed out that the Integrated Information Technology (IIT) Vision Processor (VP) and Vision Controller (VC) chips provide a flexible, programmable solution capable of executing H.261, MPEG and JPEG.
Abstract: The emergence of the CCITT H.261, MPEG, and JPEG video compression standards has created the need for hardware capable of executing all of these standards. It is pointed out that the Integrated Information Technology (IIT) Vision Processor (VP) and Vision Controller (VC) chips provide a flexible, programmable solution capable of executing H.261, MPEG and JPEG. The VP is the first programmable video signal processor optimized for algorithms based on the discrete cosine transform (DCT). The VC is a companion chip which includes a RISC (reduced instruction set computer) microcontroller to manage the system data flow. These two chips with memory can act as an H.261 QCIF codec, an H.261 FCIF encoder or decoder, a real-time FCIF JPEG encoder or decoder, and an SIF MPEG decoder.

Patent
08 Oct 1992
TL;DR: In this paper, a frame-differencing based method for coding and decoding color video data was proposed for real-time, software-only based decompression and playback in low-end personal computers wherein the computational demands required of a computer microprocessor to implement the method are readily met by an Intel 80386SX microprocessor running at 16 Mhz.
Abstract: A frame-differencing based method for coding and decoding color video data suitable for real-time, software-only based decompression and playback in low-end personal computers wherein the computational demands required of a computer microprocessor to implement the method are readily met by microprocessors such as an Intel 80386SX microprocessor running at 16 Mhz. Frame-to-frame differences are detected in a manner analogous to human perception of luminance data, rather than by the differences in the actual numerical video data. This permits greater compression of data without added computational complexity to the decompression process. Image analysis techniques are employed to ameliorate the appearance of the video. A lossless coding method that unifies two separate compressed data entities is used to obtain a greater amount of compression and simultaneously to reduce the computational complexity of the decompression process.
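A perceptual, luminance-based change test of the kind described can be sketched as below; the BT.601 luma weights and the threshold are illustrative assumptions, since the patent does not specify its exact measure:

```python
def luminance(r, g, b):
    """Perceived brightness; ITU-R BT.601 luma weights used as an
    illustrative perceptual measure."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def changed(pix_prev, pix_cur, threshold=8.0):
    """Flag a pixel as changed only if its perceived brightness moved
    by more than a threshold, rather than on raw RGB differences."""
    return abs(luminance(*pix_cur) - luminance(*pix_prev)) > threshold
```

A blue-channel jump of 100 registers as a small luminance change (weight 0.114), while the same jump in green (weight 0.587) would easily exceed the threshold, which is the sense in which the test tracks perception rather than raw data.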

Proceedings ArticleDOI
02 Jan 1992
TL;DR: Techniques based on fractal geometry and the fractal transform yield a method for compressing video images which is independent of screen resolution and aspect ratio and provides incremental improvement over time for slowly varying images.
Abstract: Techniques based on fractal geometry and the fractal transform yield a method for compressing video images which is independent of screen resolution and aspect ratio and provides incremental improvement over time for slowly varying images. It is (nearly) randomly accessible and transmission errors die out over time. Decompression with this system is extremely fast and, in fact, real-time decompression has been realized in software. Fractal video compression has the ability to display a single compressed image at a variety of screen resolutions where the rescaling is intrinsic to the representation.

Patent
Atul Puri1, Rangarajan Aravind1
05 Nov 1992
TL;DR: In this paper, an adaptive and selective coding of digital signals relating to frames and fields of the video images is proposed to adaptively control the operation of one or more types of circuitry which are used to compress digital video signals so that less bits and slower bit rates can be used to transmit high resolution video images without undue loss of quality.
Abstract: Improved compression of digital signals relating to high resolution video images is accomplished by an adaptive and selective coding of digital signals relating to frames and fields of the video images. Digital video input signals are analyzed and a coding type signal is produced in response to this analysis. This coding type signal may be used to adaptively control the operation of one or more types of circuitry which are used to compress digital video signals so that less bits, and slower bit rates, may be used to transmit high resolution video images without undue loss of quality. For example, the coding type signal may be used to improve motion compensated estimation techniques, quantization of transform coefficients, scanning of video data, and variable word length encoding of the data. The improved compression of digital video signals is useful for video conferencing applications and high definition television, among other things.

Book ChapterDOI
01 Jan 1992
TL;DR: An overview of the location techniques employed, a real-time implementation, and the results of the subjective tests which confirmed the improvement in picture quality are presented.
Abstract: New video communication and multi-media products open up a range of machine vision applications, in which the potential size of the market can justify a substantial investment in the development of sophisticated algorithms. Face location can be used to enhance the subjective performance of videophones, while still conforming with international video compression standards. This paper gives an overview of the location techniques employed, describes a real-time implementation, and presents the results of the subjective tests which confirmed the improvement in picture quality.

Patent
01 Dec 1992
TL;DR: In this paper, the pixel value sum of the two frames and a pixel value difference between the same and either a resultant interframe calculated output or the first video signal is compressed through two-dimensional orthogonal transformation for ease of storage.
Abstract: When a standard TV signal of digital format is a first video signal and another digitized TV signal having a bandwidth wider than that of the first video signal is a second video signal, every two frames of the second video signal are processed to produce a pixel-value sum and a pixel-value difference of the two frames, and either the resulting interframe output or the first video signal is compressed through two-dimensional orthogonal transformation for ease of storage.

Book ChapterDOI
01 Jan 1992
TL;DR: This paper addresses the problem of motion-compensated up-conversion of digitized motion pictures at 24 frames per second to 60 frames per second digital video signals.
Abstract: Frame rate conversion facilitates visual information exchange among systems employing various different frame rates for storage, transmission and display of image video signals. Motion pictures have a temporal rate of 24 frames per second, while most of the conventional video displays and recording devices utilize a rate of 60 fields per second. In the case of high-definition television (HDTV) systems, some proposals in the US require a video rate of 60 frames per second. Up-conversion of motion picture film to 60 frames per second is of importance because programs recorded on motion picture film can be used as high-quality source material for HDTV. In this paper, we address the problem of motion-compensated up-conversion of digitized motion pictures at 24 frames per second to 60 frames per second digital video signals.
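The core of such an up-conversion is mapping each 60 Hz output frame to a fractional position on the 24 fps film timeline; the fractional part is the weight for motion-compensated interpolation between the two bracketing film frames. A minimal sketch of the time mapping (names illustrative, interpolation itself omitted):

```python
def film_time(out_index, in_rate=24, out_rate=60):
    """Map a 60 Hz output frame index to a position on the 24 fps film
    timeline. Returns (film_frame_index, fraction), where fraction is
    the motion-interpolation weight toward the next film frame."""
    t = out_index * in_rate / out_rate
    return int(t), t - int(t)

# Output frames 0..4 sample film times 0.0, 0.4, 0.8, 1.2, 1.6:
# only every fifth output frame coincides with an actual film frame,
# so the rest must be motion-interpolated.
for k in range(5):
    print(k, film_time(k))
```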

Book ChapterDOI
07 Oct 1992
TL;DR: This paper introduces audio/ video, or “AV”, databases and discusses the key problem of data modelling in the context of time-based media.
Abstract: Advances in data compression are creating new possibilities for applications combining digital audio and digital video. These applications, such as desktop authoring environments and educational or training programs, often require access to collections of audio/video material. This paper introduces audio/video, or “AV”, databases and discusses the key problem of data modelling in the context of time-based media. Extensions needed for modelling basic audio/video structures and relationships are described. These extensions, which include temporal sequences, quality factors, derivation relationships and temporal composition, are applied to an existing audio/video data representation.

Patent
15 Sep 1992
TL;DR: In this article, a system and method of digital video editing simultaneously displays a plurality of source video windows on a screen, each window showing a digital source video stream, and the user can select among the video windows at any time while they are being shown.
Abstract: A system and method of digital video editing simultaneously displays a plurality of source video windows on a screen, each window showing a digital source video stream. The user may select among the source video windows at any time while they are being shown. The selected digital source video stream appears in a record video window, and continues to run in real-time in both the selected source video window and the record video window. All windows continuously display real-time video streams. The user may continue to make selections among the source video windows as often as desired, and may also select transformations, or special effects, from an on-screen list. The video stream playing in the record window thus forms a digital user-arranged version of selected portions of the source video streams, plus transformations, as selected by the user. The user's selections are stored to permit subsequent playback of the digital user-arranged video stream in a playback window. A single digital audio stream, or selectable multiple digital audio streams, may accompany the source and user-arranged video streams.
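The patent's "user's selections are stored to permit subsequent playback" is essentially an edit decision list. A minimal sketch of that idea, with names and structure of my own invention rather than from the patent:

```python
def record_selection(edl, time_ms, source, effect=None):
    """Append one user selection to an edit decision list: switch the
    record window to `source` at `time_ms`, optionally applying a
    transformation (special effect). Entries are appended in time order."""
    edl.append({"t": time_ms, "source": source, "effect": effect})
    return edl

def source_at(edl, time_ms):
    """Replay the list: which source stream feeds the user-arranged
    stream at a given playback time."""
    active = None
    for entry in edl:
        if entry["t"] <= time_ms:
            active = entry["source"]
    return active
```

Storing only the selections, rather than the composed video itself, is what lets the system re-render the user-arranged stream on demand in the playback window.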

Proceedings ArticleDOI
01 Nov 1992
TL;DR: The framework proposed for the ongoing second phase of Motion Picture Experts Group (MPEG-2) standard is employed to study the performance of one frequency domain scheme and investigate improvements aimed at increasing its efficiency.
Abstract: Scalable video coding is important in a number of applications where video needs to be decoded and displayed at a variety of resolution scales. It is more efficient than simulcasting, in which all desired resolution scales are coded totally independently of one another within the constraint of a fixed available bandwidth. In this paper, we focus on scalability using the frequency domain approach. We employ the framework proposed for the ongoing second phase of the Motion Picture Experts Group (MPEG-2) standard to study the performance of one such scheme and investigate improvements aimed at increasing its efficiency. Practical issues related to multiplexing of encoded data of various resolution scales to facilitate decoding are considered. Simulations are performed to investigate the potential of a chosen frequency domain scheme. Various prospects and limitations are also discussed. 1. INTRODUCTION Much of the recent work on video compression has focused on improving the performance of video coding schemes consisting of a single layer [1,2]. There are, however, a range of applications where video needs to be decoded and displayed at a variety of resolution scales. Among the noteworthy applications of interest [3,5] are multi-point video conferencing, windowed display on workstations, video communications on asynchronous transfer mode (ATM) networks and HDTV with embedded standard TV. In the light of the abundance of applications that may benefit from multi-resolution video, the second phase of the Motion Picture Experts Group
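The core of the frequency domain approach is partitioning each block's transform coefficients into a low-frequency base layer and an enhancement layer, so a low-resolution decoder can stop after the base layer. A toy sketch of that split on an 8x8 coefficient block, with names and the 4x4 partition chosen for illustration, not taken from the paper's scheme:

```python
def split_layers(coeffs, base=4):
    """Split an 8x8 transform-coefficient block into a low-frequency
    base layer (the top-left `base` x `base` sub-block) and an
    enhancement layer holding the remaining high-frequency terms.

    A low-resolution decoder inverse-transforms the base layer alone;
    a full-resolution decoder combines both layers."""
    base_layer = [row[:base] for row in coeffs[:base]]
    enh_layer = [[0 if (r < base and k < base) else c
                  for k, c in enumerate(row)]
                 for r, row in enumerate(coeffs)]
    return base_layer, enh_layer
```

Multiplexing then amounts to interleaving the two layers' bitstreams so that a decoder can extract the base layer without parsing the enhancement data, which is the practical issue the paper examines.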

Proceedings ArticleDOI
24 Mar 1992
TL;DR: The paper describes how fractal coding theory may be applied to compress video images using an image resampling sequencer (IRS) in a video compression system on a modular image processing system.
Abstract: The paper describes how fractal coding theory may be applied to compress video images using an image resampling sequencer (IRS) in a video compression system on a modular image processing system. It describes the background theory of image coding using a form of fractal equation known as iterated function system (IFS) codes. The second part deals with the modular image processing system on which to implement these operations. It briefly covers how IFS codes may be calculated. It is shown how the IRS and second-order geometric transformations may be used to describe inter-frame changes to compress motion video.
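The defining property of IFS decoding is that iterating a contractive map converges to the same attractor (the decoded image) from any starting signal. A deliberately simplified one-dimensional sketch of that iteration, with all names and parameters invented for illustration:

```python
def ifs_decode(domain_map, scale, offset, x0, n_iter=40):
    """Toy IFS decoding: repeatedly apply the contractive map
        x[i] <- scale * x[domain_map[i]] + offset[i].

    `domain_map` models sampling from a larger "domain" region (spatial
    contraction); `scale` and `offset` are the stored IFS code. Because
    |scale| < 1 the iteration converges to a unique fixed point, so the
    initial signal x0 is irrelevant to the decoded result."""
    x = list(x0)
    for _ in range(n_iter):
        x = [scale * x[domain_map[i]] + offset[i] for i in range(len(x))]
    return x
```

Only `domain_map`, `scale` and `offset` need to be transmitted, which is the source of the compression; the decoder regenerates the signal by iteration alone.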