
Showing papers on "Video quality published in 1995"


Book
01 Aug 1995
TL;DR: Digital Video Processing, Second Edition, reflects important advances in image processing, computer vision, and video compression, including new applications such as digital cinema, ultra-high-resolution video, and 3D video.
Abstract: Over the years, thousands of engineering students and professionals relied on Digital Video Processing as the definitive, in-depth guide to digital image and video processing technology. Now, Dr. A. Murat Tekalp has completely revamped the first edition to reflect today's technologies, techniques, algorithms, and trends. Digital Video Processing, Second Edition, reflects important advances in image processing, computer vision, and video compression, including new applications such as digital cinema, ultra-high-resolution video, and 3D video. This edition offers rigorous, comprehensive, balanced, and quantitative coverage of image filtering, motion estimation, tracking, segmentation, video filtering, and compression. Now organized and presented as a true tutorial, it contains updated problem sets and new MATLAB projects in every chapter. Coverage includes:
- Multi-dimensional signals/systems: transforms, sampling, and lattice conversion
- Digital images and video: human vision, analog/digital video, and video quality
- Image filtering: gradient estimation, edge detection, scaling, multi-resolution representations, enhancement, de-noising, and restoration
- Motion estimation: image formation; motion models; differential, matching, optimization, and transform-domain methods; and 3D motion and shape estimation
- Video segmentation: color and motion segmentation, change detection, shot boundary detection, video matting, video tracking, and performance evaluation
- Multi-frame filtering: motion-compensated filtering, multi-frame standards conversion, multi-frame noise filtering, restoration, and super-resolution
- Image compression: lossless compression, JPEG, wavelets, and JPEG2000
- Video compression: early standards, ITU-T H.264/MPEG-4 AVC, HEVC, Scalable Video Compression, and stereo/multi-view approaches

1,350 citations


Proceedings ArticleDOI
Behzad Shahraray1
17 Apr 1995
TL;DR: In this article, the authors propose a method for detecting abrupt and gradual scene changes in video sequences by finding the points at which the contextual information in the video frames changes significantly, so that the visual content can be represented by only a small subset of the frames.
Abstract: Digital images and image sequences (video) are a significant component of multimedia information systems, and by far the most demanding in terms of storage and transmission requirements. Content-based temporal sampling of video frames is proposed as an efficient method for representing the visual information contained in a video sequence using only a small subset of the video frames. This involves identifying and retaining frames at which the content of the scene is "significantly" different from the previously retained frames. It is argued that the criteria used to measure the significance of a change in the contents of the video frames are subjective, and that performing content-based sampling of image sequences, in general, requires a high level of image understanding. However, a significant subset of the points at which the contextual information in the video frames changes significantly can be detected by a "scene change detection" method. The definition of a scene change is generalized to include not only abrupt transitions between shots, but also gradual transitions between shots resulting from video editing modes, and inter-shot changes induced by camera operations. A method for detecting abrupt and gradual scene changes is discussed, and criteria for detecting scene changes induced by camera operations are proposed. Scene matching is proposed as a means of achieving further reductions in storage and transmission requirements.
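The content-based temporal sampling idea above can be illustrated with a toy frame-retention loop: keep a frame only when it differs "significantly" from the last retained frame. The coarse histogram-difference measure and the threshold below are illustrative stand-ins, not the change measure used in the paper.

```python
# Toy sketch of content-based temporal sampling. A frame is retained only
# when its intensity histogram differs enough from the last retained frame.
# Measure and threshold are assumed, not the paper's method.

def histogram(frame, bins=16, max_val=256):
    """Coarse intensity histogram of a frame (flat list of pixel values)."""
    h = [0] * bins
    width = max_val // bins
    for p in frame:
        h[min(p // width, bins - 1)] += 1
    return h

def hist_distance(a, b):
    """L1 distance between two histograms, normalized to [0, 1]."""
    n = sum(a)
    return sum(abs(x - y) for x, y in zip(a, b)) / (2 * n)

def sample_frames(frames, threshold=0.3):
    """Return indices of frames retained by content-based sampling."""
    retained = [0]
    ref = histogram(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = histogram(f)
        if hist_distance(ref, h) > threshold:
            retained.append(i)
            ref = h                      # new reference for future comparisons
    return retained
```

A real detector would also handle gradual transitions and camera motion, which is exactly the generalization the paper argues for.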

275 citations


Journal ArticleDOI
TL;DR: The ITU near term standard for very low bitrate video coding, H.263 (ITU-T SG 15/1 Rapporteurs Group for Very Low Bitrate Visual Telephony, 1995), is described, and a long term activity is planned by ITU for the development of a new video coding algorithm with considerably better picture quality than H.263.
Abstract: The ITU near term standard for very low bitrate video coding, H.263 (ITU-T SG 15/1 Rapporteurs Group for Very Low Bitrate Visual Telephony, 1995), is described. Both QCIF and a sub-QCIF format (128 × 96) are mandatory picture formats for the decoder; the CIF picture format is optional. The H.263 algorithm consists of a mandatory core algorithm and four negotiable options. With H.263 a significantly better picture quality than with H.261 can be achieved, depending on the content of the video scene and the coding parameters. Also, the cost of the H.263 video codec can be kept low if only the minimum required is implemented. The negotiable options of H.263 increase the complexity of the video codec, but also significantly improve the picture quality. H.263 is part of a set of recommendations for a very low bitrate audio visual terminal that was frozen in January 1995 and is based on existing technology. A long term activity is planned by ITU for the development of a new video coding algorithm (H.263/L) with considerably better picture quality than H.263. This standard will be developed in joint co-operation with MPEG4.

107 citations


Proceedings ArticleDOI
17 Apr 1995
TL;DR: This paper focuses on the video software, which gives an example of a fully compliant implementation of the standard and of a good video quality encoder, and serves as a tool for compliance testing.
Abstract: Part 5 of the International Standard ISO/IEC 13818 `Generic Coding of Moving Pictures and Associated Audio' (MPEG-2) is a Technical Report, a sample software implementation of the procedures in parts 1, 2 and 3 of the standard (systems, video, and audio). This paper focuses on the video software, which gives an example of a fully compliant implementation of the standard and of a good video quality encoder, and serves as a tool for compliance testing. The implementation and some of the development aspects of the codec are described. The encoder is based on Test Model 5 (TM5), one of the best published non-proprietary coding models, which was used during the MPEG-2 collaborative stage to evaluate proposed algorithms and to verify the syntax. The most important part of the Test Model is controlling the quantization parameter based on the image content and bit rate constraints under both signal-to-noise and psycho-optical aspects. The decoder has been successfully tested for compliance with the MPEG-2 standard, using the ISO/IEC MPEG verification and compliance bitstream test suites as stimuli.
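The quantization control that TM5 is best known for can be sketched as a virtual-buffer feedback loop: the buffer tracks the gap between bits actually spent and the per-macroblock budget, and the quantizer is scaled by buffer fullness. The constants below (quantizer range 1 to 31, reaction parameter 2 x bit_rate / picture_rate) follow the published Test Model 5 description, but the loop itself is an illustrative simplification, not the reference software.

```python
# Simplified sketch of TM5-style adaptive quantization via a virtual buffer.
# The per-macroblock bit counts are supplied as data here; a real encoder
# would obtain them by actually coding each macroblock.

def tm5_quantizer(fullness, reaction):
    """Map virtual-buffer fullness to an MPEG quantizer in 1..31."""
    q = round(fullness * 31 / reaction)
    return max(1, min(31, q))

def encode_picture(mb_bits, picture_budget, bit_rate, picture_rate, d0=None):
    """Simulate per-macroblock quantizer updates for one picture.

    Returns the quantizers used and the final virtual-buffer fullness.
    """
    reaction = 2 * bit_rate / picture_rate       # TM5 reaction parameter
    init = reaction / 2 if d0 is None else d0    # initial buffer fullness
    n = len(mb_bits)
    d = init
    quants = []
    spent = 0
    for j, b in enumerate(mb_bits):
        quants.append(tm5_quantizer(d, reaction))
        spent += b
        # fullness = initial + bits spent so far - proportional budget share
        d = init + spent - picture_budget * (j + 1) / n
    return quants, d
```

When each macroblock spends exactly its proportional share of the budget, the buffer (and hence the quantizer) stays flat; overspending raises the quantizer and pulls the rate back down.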

100 citations


Journal ArticleDOI
TL;DR: To define a hypermedia system's ease of use from the user's point of view, an interface shallowness metric, a downward compactness metric, and a downward navigability metric are proposed that express both the cognitive load on users and the structural complexity of the hypermedia contents.
Abstract: To define a hypermedia system's ease of use from the user's point of view, we propose three evaluation metrics: an interface shallowness metric, a downward compactness metric, and a downward navigability metric. These express both the cognitive load on users and the structural complexity of the hypermedia contents. We conducted a field study at the National Museum of Ethnology (NME) in Osaka, Japan, to evaluate our hypermedia system and to assess the suitability of our hypermedia metrics from the viewpoint of visiting members of the public. After developing a spreadsheet-type authoring system named HyperEX, we built prototype systems for use by members of the public visiting a special exhibition held at the museum. Questionnaires, interviews, automatic recording of users' navigation operations, and statistical analysis of 449 tested users yielded the following results. First, the suitability of the metrics was found to be satisfactory, indicating that they are useful for developing hypermedia systems. Second, there is a strong relationship between a system's enjoyability and its usability. Transparency and the friendliness of the user interface are the key issues in enjoyability. Finally, the quality of the video strongly affects the overall system evaluation. Video quality is determined by optimum selection of scenes, the length of the video, and appropriate audio-visual expression of the content. This video quality may become the most important issue in developing hypermedia for museum education.

53 citations


Proceedings ArticleDOI
04 Jul 1995
TL;DR: In MPEG-2 coding of television pictures, buffer overflow occurs during motion-intensive scenes, introducing time-varying levels of coding distortion into the video; the paper reports on the subjective effect of such distortion on viewers.
Abstract: In MPEG-2 coding of television pictures, buffer overflow occurs during motion-intensive scenes; this introduces time-varying levels of coding distortion into the video. We report on the subjective effect of such distortion on viewers. It is known that standard ITU-R subjective testing methodology, which uses 10 s presentation times, is not easily able to deal with such variations in quality. Most of the applications of this methodology have been to situations where there is a single level of quality associated with a fixed set of engineering transmission parameters. However, if MPEG video is coded at a constant bit rate which is low enough to cause distortion, the quality of successive 10 s sections can be quite different from each other. At present, there is little reported experimental work concerning such quality variations in digitally-compressed television, although a study has been made of the time-varying distortions that occur in ATM video transmission due to sporadic network overload.

51 citations


Proceedings ArticleDOI
28 Apr 1995
TL;DR: Experimental data shows the peak-detection algorithm based on the median filter is effective even for video with significant motion or sudden light changes, such as action movies or sports video.
Abstract: This paper presents a peak-detection algorithm based on the median filter. When used with difference or correlation measures between contiguous video frames, this algorithm can determine significant peaks at "abrupt scene changes" in MPEG compressed video. Experimental data shows the algorithm is effective even for video with significant motion or sudden light changes, such as action movies or sports video.
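The median-filter idea can be sketched in a few lines: the local median of the inter-frame difference measure tracks "normal" motion activity, and a frame is flagged as an abrupt scene change only when its difference value stands far above that median. The window size and ratio threshold below are assumed values, not those of the paper.

```python
# Illustrative median-filter peak detection on a sequence of inter-frame
# difference measures. Window and ratio are invented parameters.

def median(values):
    s = sorted(values)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

def detect_peaks(diffs, window=5, ratio=3.0):
    """Return indices where diffs[i] exceeds ratio * the local median."""
    half = window // 2
    peaks = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - half), min(len(diffs), i + half + 1)
        neighborhood = diffs[lo:i] + diffs[i + 1:hi]  # exclude the point itself
        if neighborhood and d > ratio * median(neighborhood):
            peaks.append(i)
    return peaks
```

Because the median is insensitive to a single outlier, steady high-motion content raises the baseline rather than triggering false peaks, which is the robustness property the abstract claims.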

41 citations


Journal ArticleDOI
TL;DR: This paper studies the performance of an unsupervised MPEG2 decoder in the presence of bit errors and cell loss, where errors are detected solely by the decoder, without requesting any error information from external devices.

38 citations


Patent
12 Oct 1995
TL;DR: In this paper, the authors present video quality evaluation equipment for reproduced images of digitally compressed video signals, capable of evaluating video quality economically and in a short period of time.
Abstract: A sync controller 14 controls the amount of delay of a delay part 15 so that original video data entered from a video source 1 is synchronized with reproduced video data which has been compressed and reproduced by the system under evaluation. A first OT calculator 17 orthogonally transforms the reproduced image, a second OT calculator 21 orthogonally transforms the original image, and a subtractor 19 obtains the difference of same-order coefficients within one block. A WSNR calculator 22 weights the difference with a weighting function that varies with the position of a coefficient of the orthogonally transformed data and the magnitude of the AC power in the block after orthogonal transform of the original image, and then obtains an average weighted S/N ratio over each video frame or a plurality of video frames. Finally, a subjective evaluation value calculator 23 calculates a subjective evaluation value (deterioration percentage) from the average weighted S/N ratio. The invention thus provides video quality evaluation equipment that can economically evaluate, in a short period of time, the quality of a reproduced image of a digitally compressed video signal.
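The core computation, a weighted S/N ratio over same-order transform coefficients, can be sketched as follows. The 1-D DCT, the 1/(1+k) weighting curve, and the block size are illustrative stand-ins; the patent's actual transform and weighting function (which also depends on the block's AC power) are not reproduced here.

```python
import math

# Rough sketch of a weighted-SNR quality measure: transform blocks of the
# original and reproduced signals, weight coefficient differences by a
# frequency-dependent function, and report a weighted S/N ratio in dB.

def dct(block):
    """Orthonormal DCT-II of a 1-D block."""
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(block))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def weighted_snr(original, reproduced, block=8):
    """Weighted S/N ratio over same-order DCT coefficients."""
    num = den = 0.0
    for start in range(0, len(original) - block + 1, block):
        o = dct(original[start:start + block])
        r = dct(reproduced[start:start + block])
        for k in range(block):
            w = 1.0 / (1 + k)    # assumed weighting: low frequencies matter more
            num += w * o[k] ** 2
            den += w * (o[k] - r[k]) ** 2
    return float("inf") if den == 0 else 10 * math.log10(num / den)
```

The patent's final step, mapping the averaged weighted S/N ratio to a subjective evaluation value, would be a separate calibrated function on top of this.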

30 citations


Proceedings ArticleDOI
01 May 1995
TL;DR: The chip set combined with synchronous DRAMs (SDRAM) supports the whole layer processing including rate-control and realizes real-time encoding for ITU-R-601 resolution video (720×480 pixels at 30 frame/s) with glueless logic.
Abstract: This paper describes a chip set architecture for a programmable video encoder based on the MPEG2 main profile at main level (MP@ML). The chip set consists of a Controller-LSI (C-LSI), a macroblock level Pixel Processor-LSI (P-LSI) and a Motion Estimation-LSI (ME-LSI). The chip set combined with synchronous DRAMs (SDRAM) supports the whole layer processing including rate-control and realizes real-time encoding for ITU-R-601 resolution video (720×480 pixels at 30 frame/s) with glueless logic. The exhaustive motion estimation capability is scalable up to ±63.5/±15.5 pixels in the horizontal/vertical directions. This chip set solution can realize a low cost MPEG2 video encoder system with excellent video quality on a small PC card.

25 citations


Patent
18 Dec 1995
TL;DR: In this paper, a video encoder allocates bits among successive frames to maximize overall perceived video quality when the encoded video sequence is decoded and displayed, but the ongoing allocation process is constrained by the need to avoid buffer underflow and overflow conditions at the decoder.
Abstract: Successive frames in a video sequence are encoded by a video encoder. The bits are apportioned among successive frames to maximize overall perceived video quality when the encoded video sequence is decoded and displayed. The ongoing allocation process is constrained by the need to avoid decoder buffer exception, i.e., buffer underflow and overflow conditions, at the decoder.
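The buffer constraint described in this patent abstract can be made concrete with a small simulation: the decoder buffer fills at the channel rate and drains by one frame's bits at each display instant, and the encoder's per-frame allocations must keep occupancy within bounds. All figures below are illustrative.

```python
# Minimal decoder-buffer (VBV-style) feasibility check for a sequence of
# per-frame bit allocations. Underflow means a frame has not fully
# arrived when it must be decoded; overflow means the buffer cannot
# absorb the incoming channel bits.

def check_allocation(frame_bits, channel_rate, frame_rate, buffer_size, initial=0):
    """Return (ok, trace) where trace is buffer occupancy after each frame."""
    per_frame_input = channel_rate / frame_rate
    occupancy = initial
    trace = []
    for bits in frame_bits:
        occupancy += per_frame_input      # bits arriving over one frame time
        if occupancy > buffer_size:
            return False, trace           # overflow: too few bits consumed
        if bits > occupancy:
            return False, trace           # underflow: frame not yet available
        occupancy -= bits                 # decoder removes the frame
        trace.append(occupancy)
    return True, trace
```

An allocator of the kind the patent describes would search over `frame_bits` to maximize perceived quality subject to `check_allocation` returning True.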

Proceedings ArticleDOI
18 Jun 1995
TL;DR: A video encoding scheme is proposed which maintains the quality of the encoded video at a constant level; it is based on a quantitative video quality measure and uses a feedback control mechanism to control the parameters of the encoder.
Abstract: Lossy video compression algorithms, such as those used in the H.261 and MPEG standards, result in quality degradation seen in the form of tiling, edge business, and mosquito noise. The number of bits required to encode a scene so as to achieve a given quality objective depends on the scene content; the more complex the scene is, the more bits are required. Therefore, in order to achieve a given video quality at all times, the encoder parameters must be appropriately adjusted according to the scene content. The authors propose a video encoding scheme which maintains the quality of the encoded video at a constant level. This scheme is based on a quantitative video quality measure, and it uses a feedback control mechanism to control the parameters of the encoder. The authors evaluate this scheme by applying it to test sequences, and compare it with constant bit rate and open-loop variable bit rate schemes in terms of quality and rate. They show that their scheme achieves better quality than the other two schemes for a given total number of bits, particularly when the content has scene changes over time.

Proceedings ArticleDOI
09 May 1995
TL;DR: A new MPEG encoder extracts four features from the input and output video sequences and feeds those features into a four-layered back-propagation neural network which has been trained by subjective testing.
Abstract: We present an objective video quality measure which has good correlation with subjective tests. We then introduce the objective measure into the design of an MPEG encoder. The new MPEG encoder extracts four features (bit rate, a feature that measures blockiness, one that measures false edges, and one that measures blurred edges) from the input and output video sequences and feeds those features into a four-layered back-propagation neural network which has been trained by subjective testing. Then the system uses a simple feedback technique to adjust the GOP (group of pictures) bit-rate to achieve a constant subjective quality output video sequence.

Proceedings ArticleDOI
14 Nov 1995
TL;DR: The dynamic resolution control scheme is effective under high-load conditions and achieves better performance than the packet discard scheme, and a combination of the two schemes provides no substantial improvement.
Abstract: This paper studies two congestion control schemes for integrated variable bit-rate video and data transmission in a centrally controlled wireless LAN. One is a dynamic resolution control scheme which switches the spatial or temporal resolution of video according to the network load; the other is a packet discard scheme which discards video packets whose elapsed time exceeds a threshold value. In addition to selective repeat ARQ control over both video and data, we apply the two schemes to video transmission separately or jointly. To examine the efficiency of the schemes, we carry out simulations using a sequence of real video frames obtained from a JPEG encoded movie. We also evaluate the video quality of the simulation output by subjective assessment. As a result, we find that the dynamic resolution control scheme is effective under high-load conditions and achieves better performance than the packet discard scheme. A combination of the two schemes provides no substantial improvement.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: A video quality measure based on block-based features is presented and a MPEG bit allocation algorithm is developed which is based on the quality measure that considers the bit-count and quality at the macroblock level.
Abstract: We present a video quality measure based on block-based features. Four block-based features (an average of FFT differences, a standard deviation of FFT differences, mean absolute deviation of wepstrum differences, and the variance of UVW differences) are extracted from the input and output video sequences and fed into a four-layer backpropagation neural network which has been trained by subjective testing. We then introduce the quality measure into the design of an MPEG encoder to maximize the quality measure. A MPEG bit allocation algorithm is developed which is based on the quality measure that considers the bit-count and quality at the macroblock level. A simple relationship between one macroblock's quality and overall quality is also addressed. The simulation on this optimization scheme has a higher quality measure when compared to the Test Model 5.
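The first two features named above, the mean and standard deviation over blocks of FFT-magnitude differences between input and output, can be reconstructed roughly as below. The 1-D DFT, the block size, and the aggregation are assumptions; the paper's exact feature definitions (and its wepstrum/UVW features) are not reproduced here.

```python
import cmath
import math

# Hypothetical block-based FFT-difference features: per block, the mean
# magnitude difference between input and output spectra; then the mean
# and standard deviation of that quantity over all blocks.

def dft_mag(block):
    """Magnitudes of the 1-D DFT of a block of samples."""
    n = len(block)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(block))) for k in range(n)]

def fft_diff_features(inp, out, block=8):
    """Return (mean, std) over blocks of the mean FFT-magnitude difference."""
    diffs = []
    for s in range(0, len(inp) - block + 1, block):
        a = dft_mag(inp[s:s + block])
        b = dft_mag(out[s:s + block])
        diffs.append(sum(abs(x - y) for x, y in zip(a, b)) / block)
    m = sum(diffs) / len(diffs)
    var = sum((d - m) ** 2 for d in diffs) / len(diffs)
    return m, math.sqrt(var)
```

In the paper's scheme, features like these feed a trained neural network; here they simply quantify how much the output spectrum departs from the input's, block by block.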

Proceedings ArticleDOI
Wei Ding1, Bede Liu1
17 Apr 1995
TL;DR: A feedback method with a rate-quantization model, which can be adapted to changes in picture activities, is developed and used for quantization parameter selection at the frame and slice level.
Abstract: For MPEG video coding and recording applications, it is important to select quantization parameters at slice and macroblock levels to produce nearly constant quality image for a given bit count budget. A well designed rate control strategy can improve overall image quality for video transmission over a constant-bit-rate channel and fulfill editing requirement of video recording, where a certain number of new pictures are encoded to replace consecutive frames on the storage media using at most the same number of bits. In this paper, we developed a feedback method with a rate-quantization model, which can be adapted to changes in picture activities. The model is used for quantization parameter selection at the frame and slice level. Extra computations needed are modest. Experiments show the accuracy of the model and the effectiveness of the proposed rate control method. A new bit allocation algorithm is then proposed for MPEG video coding.
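A feedback rate-quantization model of the kind described above can be sketched with the common first-order assumption bits ≈ X / Q, where X is a picture-activity (complexity) term estimated from the bits actually produced. The model form and the smoothing factor are assumptions, not the paper's exact formulation.

```python
# Illustrative feedback rate-quantization model: predict the quantizer
# needed to hit a bit target, then adapt the complexity estimate X from
# the observed (bits, quantizer) outcome.

class RateQuantModel:
    def __init__(self, x0=1000.0, alpha=0.5):
        self.x = x0          # complexity estimate, units of bits * quantizer
        self.alpha = alpha   # smoothing factor for model updates

    def quantizer_for(self, target_bits):
        """Quantizer predicted to produce target_bits, clipped to 1..31."""
        q = self.x / target_bits
        return max(1.0, min(31.0, q))

    def update(self, bits_used, q_used):
        """Adapt the complexity estimate from the observed outcome."""
        observed = bits_used * q_used
        self.x = (1 - self.alpha) * self.x + self.alpha * observed
```

When a picture turns out harder than predicted (more bits at the same quantizer), X rises and the next quantizer selection compensates, which is the adaptation-to-picture-activity behavior the abstract describes.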

Proceedings ArticleDOI
21 Apr 1995
TL;DR: Experimental results are given for software decoding of VBR MPEG video with both VBR usage parameter control (UPC) and decoder CPU constraints at the encoder, demonstrating improvements in delivered video quality relative to the conventional case without such controls.
Abstract: This paper presents an exploratory view of multimedia processing and transport issues for the wireless personal terminal scenario. In this scenario, portable multimedia computing devices are used to access video/voice/data information and communication services over a broadband wireless network infrastructure. System architecture considerations are discussed, leading to identification of a specific approach based on a unified wired + wireless ATM network, a general-purpose CPU based portable terminal, and a new software architecture foe efficient media handling and quality-of-service (QoS) support. The recently proposed 'wireless ATM' network concept is outlined, and the associated transport interface at the terminal is characterized in terms of available service types (ABR, VBR, CBR) and QoS. A specific MPEG video delivery applications with VBR ATM transport and software decoding is then examined in further detail. Recognizing that software video decoding at the personal terminal represents a major performance bottleneck in this system, the concept of MPEG encoder quantizer control with joint VBR bit-rate and decoder computation constraints is introduced. Experimental results are given for software decoding of VBR MPEG video with both VBR usage parameter control (UPC) and decoder CPU constraints at the encoder, demonstrating improvements in delivered video quality relative to the conventional case without such controls.© (1995) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: This work compares the performance of a 1-layer main profile and 2-layer data partitioning and SNR scalable MPEG-2 encoders in a system with random channel errors and forward error correction and indicates that SNR scalability may lead to better video quality over a larger range of bit error rates than a1-layer approach.
Abstract: Wireless ATM LANs have the potential to support multi-Mb/s bandwidths to mobile users with guaranteed quality of service. However, the lossy nature of the wireless medium will pose problems for loss-sensitive applications. Techniques to minimize the effect of these losses will therefore be required. In this paper, we examine the use of combined source and channel coding for MPEG video transport in a wireless ATM environment. Although forward error correction (FEC) provides protection against channel bit errors, the bandwidth overhead can become a significant drawback in a fixed bandwidth scenario. Additional protection against losses can be realized by using two-layer video coding. In this work, we compare the performance of a 1-layer main profile and 2-layer data partitioning and SNR scalable MPEG-2 encoders in a system with random channel errors and forward error correction. The results indicate that if the channel bit error rate is known, an optimum FEC level can be chosen for the 1-layer case; however, at this fixed FEC level, if a critical bit error rate is exceeded then video quality degrades dramatically. The 2-layer cases appear to degrade more gracefully near this critical bit error rate. In particular, SNR scalability may lead to better video quality over a larger range of bit error rates than a 1-layer approach.
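The "critical bit error rate" cliff for fixed FEC can be seen from a back-of-the-envelope calculation: a block code that corrects up to t symbol errors fails only when more than t of its n symbols are hit, so the residual block error rate stays tiny until the channel error rate approaches t/n and then rises sharply. The parameters below are illustrative, not those of the experiments.

```python
from math import comb

# Residual failure probability of a t-error-correcting block code over a
# memoryless channel: P(more than t of n symbols are in error).

def block_failure_prob(n, t, p):
    """Probability that a code block is uncorrectable at symbol error rate p."""
    ok = sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(t + 1))
    return 1 - ok
```

For example, with n = 255 and t = 8, the failure probability is negligible at p = 0.01 but rises steeply by p = 0.05, which is the "dramatic degradation past a critical error rate" behavior the 1-layer results show.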

Proceedings ArticleDOI
06 Nov 1995
TL;DR: After summarizing major causes of degradation such as variable length codes and hierarchical data structures, some techniques to improve the error resiliency of speech and video coding schemes are proposed.
Abstract: Low bit rate audio-visual coding algorithms are now being actively standardized in ITU and ISO/IEC. Their application to mobile multi-media communication systems is being considered. One of the most important features in this application is robustness against the transmission errors experienced in mobile radio channels. In this paper, the speech and video quality degradations due to errors are analyzed through computer simulations. ACELP and H.263 are used as representative low bit rate coding algorithms. After summarizing major causes of degradation such as variable length codes and hierarchical data structures, some techniques to improve the error resiliency of speech and video coding schemes are proposed.

09 Jan 1995
TL;DR: An algorithm for quantifying the spatial gradients or edges in an image as a function of angle or orientation and deriving objective video quality metrics from these low bandwidth features that quantify the amount of tiling and blurring in the output image.
Abstract: Describes an algorithm for (1) quantifying the spatial gradients or edges in an image as a function of angle or orientation, (2) extracting low-bandwidth features from these input and corresponding output spatial gradient images, and (3) deriving objective video quality metrics from these low-bandwidth features that quantify the amount of tiling (i.e., blocking) and blurring in the output image. Examples are given for images that have been compressed by MPEG and video teleconferencing codecs.
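A loose sketch of steps (1) to (3): estimate spatial gradients, bin edge energy by orientation, and compare input and output images. Extra horizontal/vertical energy in the output suggests tiling (block edges are axis-aligned), while an overall loss of edge energy suggests blurring. The difference operators, angle bins, and the two metrics below are simplified stand-ins for the algorithm's actual features.

```python
import math

# Orientation-binned edge energy and two toy quality metrics derived
# from it: a tiling indicator and a blurring indicator.

def edge_energy(img):
    """Return (hv, diag): edge energy near horizontal/vertical vs diagonal."""
    hv = diag = 0.0
    rows, cols = len(img), len(img[0])
    for y in range(rows - 1):
        for x in range(cols - 1):
            gx = img[y][x + 1] - img[y][x]       # simple difference operators
            gy = img[y + 1][x] - img[y][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = abs(math.degrees(math.atan2(gy, gx))) % 90
            if angle < 20 or angle > 70:          # near horizontal/vertical
                hv += mag
            else:
                diag += mag
    return hv, diag

def tiling_and_blurring(inp, out):
    """Positive tiling => extra H/V energy; positive blurring => lost energy."""
    hv_in, d_in = edge_energy(inp)
    hv_out, d_out = edge_energy(out)
    tiling = hv_out - hv_in
    blurring = (hv_in + d_in) - (hv_out + d_out)
    return tiling, blurring
```

Both metrics are "low bandwidth" in the paper's sense: a handful of numbers per image rather than the image itself, so they could be transmitted alongside the video for in-service monitoring.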

Proceedings ArticleDOI
D. Wang1, J. Hartung1
09 May 1995
TL;DR: A codebook adaptation algorithm is described which uses an "equidistortion principle" and a competitive learning algorithm to continuously adapt the codewords, which results in an increased use of the more efficient vector quantizer and improved video quality.
Abstract: Proposes a codebook adaptation algorithm for very low bit rate, real-time video coding. Although adaptive codebook design has been studied in the past, its implementation at very low coding rates suitable for the MPEG4 standard remains significantly challenging. The coder uses a standard motion compensated predictor with DCT quantization. It is unique in that it uses a hybrid scalar/vector quantizer to code predictor residuals. Bits are dynamically allocated to minimize distortion in the current frame, and scalar quantized blocks are used to adapt the VQ codebook. A codebook adaptation algorithm is described which uses an "equidistortion principle" and a competitive learning algorithm to continuously adapt the codewords. This training algorithm results in an increased use of the more efficient vector quantizer and improved video quality.

Journal Article
TL;DR: A pair of related graphics accelerator chips is described that integrates video rendering primitives with two-dimensional and three-dimensional synthetic graphics primitives.
Abstract: The fusion of multimedia and traditional computer graphics has long been predicted but has been slow to happen. The delay is due to many factors, including their dramatically different data type and bandwidth requirements. Digital has designed a pair of related graphics accelerator chips that integrate video rendering primitives with two-dimensional and three-dimensional synthetic graphics primitives. The chips perform one-dimensional filtering and scaling on either YUV or RGB source data. One implementation dithers YUV source data down to 256 colors. The other converts YUV to 24-bit RGB, which is then optionally dithered. Both chips leave image decompression to the CPU. The result is significantly faster frame rates at higher video quality, especially for displaying enlarged images. The paper compares the implementation cost of various design alternatives and presents performance comparisons with software image rendering.

Proceedings ArticleDOI
16 Oct 1995
TL;DR: This paper proposes a distributed programmable motion estimator architecture, optimizing its processing engine for hardware efficiency and its control engine for flexibility, and achieves near full search quality.
Abstract: Future generations of video codecs need programmable video processing capabilities to extend the range of applications. A key component of the video codec design is the motion estimator. Because of its high computational requirements, a programmable motion estimator design must carefully balance its programmability and cost-effectiveness. In this paper we propose a distributed programmable motion estimator architecture and optimize its processing engine for hardware efficiency and the control engine for flexibility. The distributed architecture models motion estimation as searching through a tree of vector points, where the traversing functions are implemented in a multi-mode vector search engine and the hierarchy is constructed by an algorithm controller with flexible DMA transfers. Based on the distributed architecture a programmable motion estimator is implemented within a 0.5 µm CMOS video codec for multiple video standards. The programmable motion estimator achieves near full search quality (degradation less than 0.1 dB for CIF 30 f/s H.261) with only 1/4 of the processing and memory resources and can be reused for H.26X and MPEG coding across a wide range of resolution and video quality tradeoff points.
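"Searching through a tree of vector points" can be illustrated with the classic three-step search: evaluate a small grid of candidate vectors at a coarse step, recentre on the best, and halve the step. The SAD cost and the step schedule below are the usual textbook choices, shown purely as an illustration of hierarchical search, not as the paper's architecture.

```python
# Three-step search motion estimation: a simple instance of a
# tree-structured vector search. cur/ref are 2-D lists of pixel values.

def sad(cur, ref, bx, by, dx, dy, size):
    """Sum of absolute differences for one candidate displacement."""
    total = 0
    for y in range(size):
        for x in range(size):
            ry, rx = by + y + dy, bx + x + dx
            if not (0 <= ry < len(ref) and 0 <= rx < len(ref[0])):
                return float("inf")       # candidate falls outside the frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def three_step_search(cur, ref, bx, by, size=4, step=4):
    """Return the (dx, dy) motion vector found by three-step search."""
    best = (0, 0)
    while step >= 1:
        cx, cy = best
        candidates = [(cx + i * step, cy + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates,
                   key=lambda v: sad(cur, ref, bx, by, v[0], v[1], size))
        step //= 2
    return best
```

Each level of the tree evaluates only 9 candidates instead of the full window, which is how such searches approach full-search quality at a fraction of the processing cost.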

Proceedings ArticleDOI
14 Nov 1995
TL;DR: It is found that in order to achieve high resolution video conferencing at the desktop, a dedicated hardware decoder is needed and software handling of the MPEG-1 packets introduces jitter greater than 250 ms due to the underlying non-real time operating system.
Abstract: This paper describes the experimental transport of a full motion MPEG-1 system stream (combines audio, video and timing information) over an ATM network. We analysed the factors related to the host interface that affected the video and audio quality. We found that in order to achieve high resolution video conferencing at the desktop, a dedicated hardware decoder is needed. Furthermore, software handling of the MPEG-1 packets introduces jitter greater than 250 ms due to the underlying non-real time operating system. We plan to use the results to define and develop a dedicated network-interface and audio-video decoder-encoder subsystem that unloads the PC and facilitates video conferencing and video playback.

Patent
14 Apr 1995
TL;DR: In this article, the authors proposed a relational expression of parameters showing the audio, video and their total quality for the evaluation standard signals of audio and video which are previously obtained through the subject evaluation.
Abstract: PURPOSE: To estimate the total quality of audio and video without giving the subjective evaluation to the signal to be evaluated by obtaining a relational expression of parameters showing the audio, video and their total quality for the evaluation standard signals of audio and video which are previously obtained through the subject evaluation. CONSTITUTION: The parameters (Q a , Q v , Q av ) showing the audio, video and total quality are defined respectively as the noise ratios to peak signal (SNR a , SNR v , SNR av ), and changed into 20 types of audio/video signals AV. In the total quality evaluation, the signals AV having Q a and Q v of the standard signals equal to each other are used as the anchor signals (a). Then Q a of the signal (a) is changed into Q av and the subjective evaluation is applied to plural signals (a) of different Q av . The total quality of an evaluation object is converted into Q av of signals (a) having the same subject evaluation value V based on the value of the evaluation subject and the signal (a). Then a relational expression F is obtained based on the result showing the quality of standard signals AV of 20 types of conditions in Q av . At the same time, the conversion value obtained between Q v and Q a is calculated based on every quality evaluation for the audio/video quality of the signal to be evaluated. Thus the total quality can be estimated by the expression F and without performing the subject evaluation. COPYRIGHT: (C)1996,JPO

Proceedings ArticleDOI
26 Mar 1995
TL;DR: This work applies MPEG-1 video traffic sources with FEC to a simulated network, using a novel rate-based protocol that adapts the compression ratio of the video source to avoid unrecoverable packet loss due to network congestion.
Abstract: Packet loss is widely used as an indication of network congestion over networks that do not provide bandwidth reservation. We use packet loss in conjunction with FEC as a means of congestion control for MPEG video sources. The receiver implements FEC to recover from packet loss; the level of packet loss is also recorded and transmitted back to the source, which acts on this information to adjust the compression ratio and thereby alleviate congestion. We describe a simulation study in which a real MPEG video sequence with FEC is packetised into IP-style datagrams and transmitted through a simple network based on an output-buffered multiplexor. We apply MPEG-1 video traffic sources with FEC to the simulated network, using a novel rate-based protocol to adapt the compression ratio of the video source and so avoid unrecoverable packet loss due to network congestion. Packet loss is used as a measure of congestion at the receiving nodes, which periodically transmit this loss information back to the sending nodes. The FEC allows the receiving nodes to recover from the low levels of packet loss that occur at the onset of congestion, thereby avoiding considerable loss of video quality.
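The feedback loop described above — back off when reported loss exceeds what the FEC can repair, otherwise probe gently for spare bandwidth — can be sketched as follows. All constants and the AIMD-style policy are assumptions for illustration; the paper's actual rate-based protocol may differ:

```python
def adapt_rate(current_rate, loss_ratio, recoverable=0.02,
               backoff=0.75, probe=1.05,
               min_rate=64_000, max_rate=1_500_000):
    """Adjust the source bit rate (bits/s) from receiver-reported loss.

    recoverable -- loss fraction the FEC is assumed able to repair
    backoff     -- multiplicative decrease applied on unrecoverable loss
    probe       -- gentle multiplicative increase applied otherwise
    """
    if loss_ratio > recoverable:
        # Congestion beyond FEC's repair capability: cut the rate sharply.
        return max(min_rate, current_rate * backoff)
    # Loss is within FEC's reach: probe for additional bandwidth.
    return min(max_rate, current_rate * probe)
```

The source would call this on each periodic loss report and re-target the encoder's compression ratio to the returned rate.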

Journal ArticleDOI
TL;DR: This paper defines the important factors that determine video quality in compression systems and describes the differing requirements placed on compression by post-production and distribution and concludes by examining results from three cascaded compression experiments using JPEG and MPEG-2.
Abstract: Post-production equipment increasingly uses video compression to achieve economies of storage and bandwidth. Users of compressed video systems need to be aware of the factors that determine video quality. As compression technology becomes more widespread, the cascading of multiple compression systems increases. The compression techniques that are optimal for transmission and distribution of video are not the same as those optimal for post-production. This paper defines the important factors that determine video quality in compression systems. It describes the differing requirements placed on compression by post-production and distribution and concludes by examining results from three cascaded compression experiments using JPEG and MPEG-2.

Proceedings ArticleDOI
04 Jul 1995
TL;DR: An intelligent buffering algorithm is proposed for preventing buffer overflow and for smoothing out the occupancy fluctuation; the performance of the proposed algorithm has been verified on an MPEG-1 encoder.
Abstract: In MPEG video encoding, efficient buffering and rate control are especially crucial for constant bit rate (CBR) applications such as non-ATM (asynchronous transfer mode) channels and satellite communication channels. In a CBR environment, compressed video data, which is inherently variable in bit rate, must be throttled to a fixed-rate channel by managing the buffer operation. At lower transmission rates, or on an abrupt scene change, buffer occupancy can rise dramatically or the buffer can overflow, interrupting normal encoding and degrading video quality. An intelligent buffering algorithm is proposed for preventing buffer overflow and for smoothing out the occupancy fluctuation. The algorithm exploits the major system parameters of the MPEG encoder that directly influence one another. The performance of the proposed algorithm has been verified on an MPEG-1 encoder.
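One common way to smooth buffer occupancy, sketched here as an assumption rather than this paper's algorithm, is to coarsen the quantiser as the encoder buffer fills: coarser quantisation produces fewer bits per picture, so the buffer drains toward the channel rate before it can overflow.

```python
def next_quantiser(buffer_fullness, buffer_size, q_min=2, q_max=31):
    """Map encoder buffer occupancy to an MPEG-style quantiser step.

    buffer_fullness -- current occupancy in bits
    buffer_size     -- total buffer capacity in bits
    q_min, q_max    -- quantiser range (2..31 as in MPEG-1/2 slices)
    """
    fraction = min(max(buffer_fullness / buffer_size, 0.0), 1.0)
    # Linear mapping: empty buffer -> finest step, full buffer -> coarsest.
    return round(q_min + fraction * (q_max - q_min))
```

A production rate controller (e.g. the MPEG-2 Test Model 5 scheme) adds per-picture bit budgets and complexity estimates on top of this basic buffer feedback.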

Patent
11 Aug 1995
TL;DR: In this article, filters that reject the video-signal band are fitted to every transmission line except the video transmission line, eliminating or attenuating leakage of the video signal without impairing video quality and allowing inexpensive transmission lines to be installed.
Abstract: PURPOSE: To reduce cost by fitting each transmission line other than the video transmission line with a filter that rejects the video-signal band, thereby eliminating or attenuating leakage of the video signal without impairing video quality and permitting inexpensive transmission lines to be installed. CONSTITUTION: Filters Xb and Xc, which attenuate the frequency band of the video signal, are provided on the branch side of each branch of the speech transmission line Mb and the control transmission line Mc, i.e. every line except the video transmission line Ma, so that the video signal output from the video control panel C is attenuated on those lines. The control signal output from the supervisory panel D and the sound signal output from the lobby interphone B are not attenuated. Leakage from the video signal line Ma into the speech transmission line Mb and the control transmission line Mc is thus attenuated by the filters Xb and Xc, preventing the leaked signal from coupling from the lines Mb and Mc back into the video signal line Ma. Ordinary signal wires can therefore be used in place of coaxial cable, allowing inexpensive multi-pair balanced cables without degrading video quality and thereby reducing the wiring cost.

09 Nov 1995
TL;DR: To find examples of exemplary and problematic techniques, over 32 hours of distance education classes were scanned and the points of view that operate in a distance education context were identified; using the study results, a prototype was constructed that showed points of view and provided information about sequencing.
Abstract: Educators participating in distance education have generally not received training in the production of effective video, although they do need to be able to appear in video suitable for effective instruction. The level of video quality required is referred to as "informal" video. Rapid prototyping is a concept in which formative evaluation is implemented using low-fidelity products that simulate the high-fidelity product used in summative evaluation. This technique, which is helpful in the early stages of development, can be used in video production. To find examples of exemplary and problematic techniques, over 32 hours of distance education classes were scanned, and the points of view that operate in a distance education context were identified. Using study results, a prototype was constructed that showed points of view and provided information about sequencing. An appendix contains a pedagogical table, samples of scripts and a video log, and a nonlinear editor screen sample. (SLD)