
Showing papers on "Video quality published in 1997"


Journal ArticleDOI
TL;DR: The results indicate that variable-rate transmission can increase the quality of the decoded sequences without increases in the end-to-end delay, and it is shown that for the leaky-bucket channel, the channel constraints can be combined with the buffer constraints, such that the system is identical to CBR transmission with an additional, infrequently imposed constraint.
Abstract: Variable bit-rate (VBR) transmission of video over ATM networks has long been said to provide substantial benefits, both in terms of network utilization and video quality, when compared with conventional constant bit-rate (CBR) approaches. However, realistic VBR transmission environments will certainly impose constraints on the rate that each source can submit to the network. We formalize the problem of optimizing the quality of the transmitted video by jointly selecting the source rate (number of bits used for a given frame) and the channel rate (number of bits transmitted during a given frame interval). This selection is subject to two sets of constraints, namely, (1) the end-to-end delay has to be constant to allow for real-time video display and (2) the transmission rate has to be consistent with the traffic parameters negotiated by user and network. For a general class of constraints, including such popular ones as the leaky bucket, we introduce an algorithm to find the optimal solution to this problem. This algorithm allows us to compare VBR and CBR under the same end-to-end delay constraints. Our results indicate that variable-rate transmission can increase the quality of the decoded sequences without increases in the end-to-end delay. Finally, we show that for the leaky-bucket channel, the channel constraints can be combined with the buffer constraints, such that the system is identical to CBR transmission with an additional, infrequently imposed constraint. Therefore, a leaky-bucket channel can achieve the same video quality as a CBR channel with larger physical buffers, without adding to the physical delay in the system.
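A minimal sketch of the leaky-bucket channel constraint that the optimization runs against may help: a per-frame-interval transmission schedule is admissible only if the bucket never overflows, and CBR is the special case of a constant schedule. The function below is an illustration under assumed units (bits per frame interval), not the authors' algorithm.

```python
# Hedged sketch: admissibility of a per-frame transmission schedule under a
# leaky bucket. All parameter names and units are assumptions for illustration.

def leaky_bucket_admissible(channel_bits, leak_rate, bucket_size):
    """channel_bits: bits sent in each frame interval; leak_rate: bits drained
    per interval; bucket_size: maximum backlog the contract allows."""
    fill = 0.0
    for bits in channel_bits:
        fill = max(0.0, fill + bits - leak_rate)
        if fill > bucket_size:
            return False  # violates the negotiated traffic parameters
    return True

# CBR is the degenerate case where every entry equals leak_rate.
print(leaky_bucket_admissible([12e3, 30e3, 8e3], leak_rate=15e3, bucket_size=20e3))
```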

180 citations


Proceedings ArticleDOI
01 Nov 1997
TL;DR: A new method is proposed for making correspondences between image clues detected by image analysis and language clues detected by natural language analysis, and is applied to closed-captioned CNN Headline News.

Abstract: The Spotting by Association method for video analysis is a novel method to detect video segments with typical semantics. Video data contains various kinds of information through continuous images, natural language, and sound. For videos to be stored and retrieved in a Digital Library, it is essential to segment the video data into meaningful pieces. To detect meaningful segments, we need to identify the segment in each modality (video, language, and sound) that corresponds to the same story. For this purpose, we propose a new method for making correspondences between image clues detected by image analysis and language clues detected by natural language analysis. As a result, relevant video segments with sufficient information from every modality are obtained. We applied our method to closed-captioned CNN Headline News. Video segments with important events, such as a public speech, meeting, or visit, are detected fairly well.

120 citations


Journal ArticleDOI
Wei Ding1
TL;DR: It is argued that a rate control scheme has to balance both issues of consistent video quality on the encoder side and bitstream smoothness for the SMG on the network side and the effectiveness of the proposed algorithm is shown.
Abstract: Rate control is considered an important issue in video coding, since it significantly affects video quality. We discuss joint encoder and channel rate control for variable bit-rate (VBR) video over packet-switched ATM networks. Since variable bit-rate traffic is allowed in such networks, an open-loop encoder without rate control can generate consistent quality video. However, in order to improve the statistical multiplexing gain (SMG), an encoder buffer is essential to meet traffic constraints imposed by networks and to smooth the highly variable video bitstream. Due to the finite buffer size, some form of encoder rate control has to be enforced, and consequently, video quality varies. We argue that a rate control scheme has to balance both issues of consistent video quality on the encoder side and bitstream smoothness for the SMG on the network side. We present a joint encoder and channel rate control algorithm for ATM networks with leaky buckets as open-loop source flow control models. The algorithm considers constraints imposed by the encoder and decoder buffers, the leaky bucket control, traffic smoothing, and rate control. The encoder rate control is separated into a sustainable-rate control and a unidirectional instantaneous-rate control. It mitigates the leaky-bucket saturation problem exhibited in previous works. Experimental results with MPEG video are presented. The results verify our analysis and show the effectiveness of the proposed algorithm.
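As a rough illustration of the separation described here, a hypothetical controller might combine a long-term sustainable-rate budget with a unidirectional (throttle-only) instantaneous correction driven by encoder buffer fullness. This is a sketch under invented names and constants, not the paper's algorithm.

```python
# Hypothetical two-tier rate control: a sustainable per-frame budget plus a
# one-sided instantaneous correction that only activates when the encoder
# buffer runs high. alpha and the 0.5 midpoint are invented constants.

def target_bits(avg_rate, frame_rate, buffer_fill, buffer_size, alpha=0.5):
    sustainable = avg_rate / frame_rate            # long-term budget per frame
    overshoot = max(0.0, buffer_fill - 0.5 * buffer_size)
    # Unidirectional: never inflate the target when the buffer is low, so
    # quality is not perturbed while there is headroom.
    return max(0.0, sustainable - alpha * overshoot)
```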

113 citations


Book ChapterDOI
01 Jan 1997
TL;DR: A method of QoS mapping between the user's preference on video quality and the bandwidth required to transport the video across the network is presented; perceived quality is quantified by MOS (Mean Opinion Score) evaluation.

Abstract: In this paper, we present a method of QoS mapping between the user's preference on video quality and the bandwidth required to transport the video across the network. We first investigate the mapping method from QoS parameters to the required bandwidth on the network. For this purpose, we assume that the underlying network supports some bandwidth allocation mechanism, such as the DBR service class in ATM, RSVP, IPv6, and so on. Then, for given QoS parameters in terms of spatial, SNR, and temporal resolution, the bandwidth required to support MPEG-2 video transmission is determined by analyzing traced MPEG-2 streams. We next consider the mapping method between QoS parameters and the user's perceived video quality, which is quantified by MOS (Mean Opinion Score) evaluation. Based on the above results, we discuss a systematic method to estimate the bandwidth required to guarantee the user's preference on video quality.
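The two-stage mapping can be pictured as composing two tables: subjective tests yield a MOS-to-quality relation, traced streams a quality-to-bandwidth relation, and folding the QoS-parameter step in gives a direct MOS-to-bandwidth table. A toy interpolation over invented numbers:

```python
# Toy composition of the two mappings (MOS -> QoS parameters -> bandwidth),
# collapsed into one table. The numbers are made up for illustration only.
import bisect

mos_to_bw = [(2.0, 1.5e6), (3.0, 3.0e6), (4.0, 6.0e6), (4.5, 9.0e6)]  # (MOS, bit/s)

def required_bandwidth(target_mos):
    """Piecewise-linear interpolation of the MOS -> bandwidth table."""
    xs = [m for m, _ in mos_to_bw]
    i = bisect.bisect_left(xs, target_mos)
    if i == 0:
        return mos_to_bw[0][1]
    if i == len(xs):
        return mos_to_bw[-1][1]
    (m0, b0), (m1, b1) = mos_to_bw[i - 1], mos_to_bw[i]
    return b0 + (b1 - b0) * (target_mos - m0) / (m1 - m0)

print(required_bandwidth(3.5))  # -> 4.5e6 under this made-up table
```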

79 citations


Journal ArticleDOI
TL;DR: An algorithm which employs multiple Lagrange multipliers to find the constrained bit allocation is developed, and the solution is optimal for both CBR and VBR transmission if the full constrained tree is searched.
Abstract: We consider optimal encoding of video sequences for ATM networks. Two cases are investigated. In one, the video units are coded independently (e.g., motion JPEG), while in the other, the coding quality of a later picture may depend on that of an earlier picture (e.g., H.26x and MPEGx). The aggregate distortion-rate relationship for the latter case exhibits a tree structure, and its solution commands a higher degree of complexity than the former. For independent coding, we develop an algorithm which employs multiple Lagrange multipliers to find the constrained bit allocation. This algorithm is optimal up to a convex-hull approximation of the distortion-rate relations in the case of CBR (constant bit-rate) transmission. It is suboptimal in the case of VBR (variable bit-rate) transmission by the use of a suboptimal transmission rate control mechanism for simplicity. For dependent coding, the Lagrange-multiplier approach becomes rather unwieldy, and a constrained tree search method is used. The solution is optimal for both CBR and VBR transmission if the full constrained tree is searched. Simulation results are presented which confirm the superiority in coding quality of the encoding algorithms. We also compare the coded video quality and other characteristics of VBR and CBR transmission.
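For the independent-coding case, the underlying Lagrangian principle can be sketched: for a multiplier lambda, each unit independently picks the operating point minimizing D + lambda*R, and sweeping lambda traces the convex hull of achievable allocations. The bisection below is a generic illustration of that principle (it assumes the initial upper bound on lambda is feasible), not the paper's multiple-multiplier algorithm with transmission-rate constraints.

```python
# Generic convex-hull bit allocation for independently coded units.
# units: one list of (rate_bits, distortion) operating points per unit.

def allocate(units, lmbda):
    choices, total_rate = [], 0
    for points in units:
        r, d = min(points, key=lambda p: p[1] + lmbda * p[0])  # min D + lambda*R
        choices.append((r, d))
        total_rate += r
    return choices, total_rate

def fit_budget(units, budget, lo=0.0, hi=10.0, iters=40):
    """Bisect on lambda; assumes allocate(units, hi) already fits the budget."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        _, rate = allocate(units, mid)
        if rate <= budget:
            hi = mid   # feasible: try a smaller penalty (more bits, less distortion)
        else:
            lo = mid   # over budget: penalize rate harder
    return allocate(units, hi)
```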

76 citations


Journal ArticleDOI
TL;DR: Objective video quality assessment methods that can be used for continuous picture quality monitoring of digital encoding equipment at the entrance of the DVB network will be surveyed, along with parameters that could be extracted and transmitted to other parts of the network.
Abstract: In digital television networks, where the bitrates used for the transmission of video signals are variable, the issue of quality of service is of great importance. The QoS parameter of concern in this paper is the quality of MPEG-2 compressed video. Objective video quality assessment methods that can be used for continuous picture quality monitoring of digital encoding equipment at the entrance of the DVB network will be surveyed, along with parameters that can be extracted and transmitted to other parts of the network. The main objective of the contribution is to give an overview of the possibilities that exist and the progress that has been made so far.

73 citations


Journal ArticleDOI
TL;DR: Results suggest that quiz performance does not suffer under reduced video quality conditions, but subjective satisfaction significantly decreases, and guidelines for implementing and using DVC systems in distance learning applications are provided.
Abstract: Distance learning applications can now make use of networked computers to transmit and display video, audio, and graphics. However, desktop video conferencing systems (DVC) often display degraded images due to bandwidth restrictions and computer processing limitations. The literature on the influence of video parameters such as frame rate and resolution on subjective opinions and human performance is sparse. A two-part study involving a controlled laboratory experiment and a field study evaluation was conducted on technical parameters affecting the suitability of DVC for distance learning. In the laboratory study, three frame rate conditions (1, 6, and 30 frames per second), two resolution conditions (160 × 120 and 320 × 240), and three communication channel conditions were manipulated. Dependent measures included performance on a quiz and subjective satisfaction with the image quality. Results suggest that quiz performance does not suffer under reduced video quality conditions, but subjective satisfaction significantly decreases. The field study employed similar dependent measures; it indicates that students in real classroom situations may be less critical of poor video quality than in laboratory settings, and it confirms the laboratory result that performance does not suffer. However, current video conferencing technology needs to be improved and configured effectively to support college teaching at a distance. Guidelines for implementing and using DVC systems in distance learning applications are provided.

65 citations


Patent
29 Jan 1997
TL;DR: In this paper, a method of automatic measurement of compressed video quality superimposes special markings in the active video region of a subset of contiguous frames within a test video sequence, providing a temporal reference, a spatial reference, gain/level reference, measurement code, a test sequence identifier and/or a prior measurement value.
Abstract: A method of automatic measurement of compressed video quality superimposes special markings in the active video region of a subset of contiguous frames within a test video sequence. The special markings provide a temporal reference, a spatial reference, a gain/level reference, a measurement code, a test sequence identifier and/or a prior measurement value. The temporal reference is used by a programmable instrument to extract a processed frame from the test video sequence after compression encoding-decoding which is temporally aligned with a reference frame from the test video sequence. The spatial reference is used by the programmable instrument to spatially align the processed frame to the reference frame. The measurement code is used by the programmable instrument to select the appropriate measurement protocol from among a plurality of measurement protocols. In this way video quality measures for a compression encoding-decoding system are determined automatically as a function of the special markings within the test video sequence.

58 citations


Journal ArticleDOI
TL;DR: Simulation results show that the studied feedback mechanisms provide a quality of video at least comparable to a constant bit rate (CBR) connection reserving the same amount of bandwidth, and can exploit unutilized network bandwidth when it becomes available.
Abstract: While existing research shows that reactive congestion control mechanisms are capable of providing high video quality and channel utilization for point-to-point real-time video, there has been relatively little study of the reactive congestion control of point-to-multipoint video, especially in ATM networks. Problems complicating the provision of multipoint, feedback-based real-time video service include: (1) implosion of feedback returning to the source as the number of multicast destinations increases and (2) variance in the amount of available bandwidth on different branches in the multipoint connection. A new service architecture is proposed for real-time multicast video, and two multipoint feedback mechanisms to support this service are introduced and studied. The mechanisms support a minimum bandwidth guarantee and the best effort support of video traffic exceeding the minimum rate. They both rely on adaptive, multilayered coding at the video source and closed-loop feedback from the network in order to control both the high and low priority video generation rates of the video encoder. Simulation results show that the studied feedback mechanisms provide, at the minimum, a quality of video comparable to a constant bit rate (CBR) connection reserving the same amount of bandwidth. When unutilized network bandwidth becomes available, the mechanisms are capable of exploiting it to dynamically improve video quality beyond the minimum guaranteed level.

51 citations


Proceedings ArticleDOI
21 Apr 1997
TL;DR: A fade detector that exploits the changes in the average luminosities as well as the semi-parabolic behavior of the variances of the frames in the fade region is developed and results indicate that the developed detector has over a 95% reliability rate.
Abstract: The use of fade in video production to smooth scene changes and enhance the video quality complicates subsequent video compression or video editing. It is important to detect the fade regions in order to improve the quality of the compressed video or to allow automatic parsing of the video for the purpose of editing and database indexing. A fade detector that exploits the changes in the average luminosities as well as the semi-parabolic behavior of the variances of the frames in the fade region is developed. Simulation results indicate that the developed detector has over a 95% reliability rate.
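A hedged sketch of such a detector: during a fade to or from a constant frame, the mean luminance varies roughly linearly while the variance is semi-parabolic (so the standard deviation is roughly linear), which suggests testing straight-line fits on both statistics. The thresholds and least-squares test below are illustrative guesses, not the authors' exact detector.

```python
# Illustrative fade test over a candidate frame window. Thresholds (r2_min,
# the minimum luminance swing of 10) are invented for the sketch.
import numpy as np

def looks_like_fade(frames, r2_min=0.95):
    """frames: list of 2-D luminance arrays covering the candidate region."""
    t = np.arange(len(frames))
    means = np.array([f.mean() for f in frames])
    stds = np.array([f.std() for f in frames])   # sqrt(variance): ~linear in a fade

    def linear_r2(y):  # goodness of a straight-line fit
        a, b = np.polyfit(t, y, 1)
        resid = y - (a * t + b)
        return 1.0 - resid.var() / max(y.var(), 1e-12)

    return (linear_r2(means) > r2_min and linear_r2(stds) > r2_min
            and abs(means[-1] - means[0]) > 10)
```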

50 citations


Patent
Boon-Lock Yeo1
05 Feb 1997
TL;DR: In this paper, a system and method for browsing dynamic video information over a network includes selecting a small subset of frames from video shots and collections, using a threshold difference between two compared frames, to capture the dynamic content.
Abstract: A system and method for browsing dynamic video information over a network includes selecting a small subset of frames from video shots and collections, using a threshold difference between two compared frames, to capture the dynamic content. Further selection and interleaving of the selected frames within the shots and collections can be done to satisfy resource constraints such as bandwidth utilization. Interleaving is performed in storage and transmission of the frames to improve the presentation of video information and to simultaneously display selected frames on a computer display to convey the dynamics of the video. The system and method permit a dynamic summary of video information to be sent to a user over a network, while reducing the network resources (bandwidth) used for the amount of information presented to the user.
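The selection rule reduces to keeping a frame only when it differs from the last kept frame by more than a threshold. A minimal sketch, with mean absolute pixel difference as an assumed distance metric:

```python
# Threshold-based keyframe subset selection. The metric and threshold are
# assumptions for illustration; frames are numpy luminance arrays.
import numpy as np

def select_keyframes(frames, threshold=12.0):
    kept = [0]                                   # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float)
                      - frames[kept[-1]].astype(float)).mean()
        if diff > threshold:
            kept.append(i)                       # enough new dynamic content
    return kept                                  # indices of the selected subset
```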

Journal ArticleDOI
TL;DR: This paper examines the glitching that occurs when the video information is not delivered on time at the receiver, and characterizes various glitching quantities, such as the glitch duration, total number of macroblocks unavailable per glitch, and the maximum unavailable area per glitch.
Abstract: In this paper, we present the performance of asynchronous transfer mode (ATM) networks supporting audio/video traffic. The performance evaluation is done by means of a computer simulation model driven by real video traffic generated by encoding video sequences. We examine the glitching that occurs when the video information is not delivered on time at the receiver; we characterize various glitching quantities, such as the glitch duration, total number of macroblocks unavailable per glitch, and the maximum unavailable area per glitch. For various types of video contents, we compare the maximum number of constant bit-rate (CBR) and constant-quality variable bit-rate (CQ-VBR) video streams that can be supported by the network while meeting the same end-to-end delay constraint, the same level of encoded video quality, and the same glitch rate constraint. We show that when the video content is highly variable, many more CQ-VBR streams than CBR streams can be supported under given quality and delay constraints, while for relatively uniform video contents (as in a videoconferencing session), the number of CBR and CQ-VBR streams supportable is about the same. We also compare the results with those obtained for a 100Base-T Ethernet segment. We then consider heterogeneous traffic scenarios, and show that when video streams with different content, encoding scheme, and encoder control schemes are mixed, the results are at intermediate points compared to the homogeneous cases, and the maximum number of supportable streams of a given type can be determined in the presence of other types of video traffic by considering an "effective bandwidth" for each of the stream types. We consider multihop ATM network scenarios as well, and show that the number of video streams that can be supported on a given network node is very weakly dependent on the number of hops that the video streams traverse. Finally, we also consider scenarios with mixtures of video streams and bursty traffic, and determine the effect of bursty traffic load and burst size on the video performance.
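The glitch bookkeeping can be illustrated by grouping consecutive frames with late macroblocks into glitches and recording duration, total, and maximum unavailable area. The per-frame input format below is an assumption for the sketch.

```python
# Group consecutive frames with missing (late) macroblocks into glitches and
# report the quantities the paper characterizes. Input format is assumed.

def glitch_stats(missing_mbs_per_frame):
    glitches, cur = [], None
    for n in missing_mbs_per_frame:
        if n > 0:
            cur = cur or {"duration": 0, "total_mbs": 0, "max_mbs": 0}
            cur["duration"] += 1
            cur["total_mbs"] += n
            cur["max_mbs"] = max(cur["max_mbs"], n)
        elif cur:
            glitches.append(cur)
            cur = None
    if cur:
        glitches.append(cur)
    return glitches

print(glitch_stats([0, 5, 9, 0, 0, 2]))  # two glitches in this toy timeline
```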

Proceedings ArticleDOI
01 Jan 1997
TL;DR: Two methods for mining association rules with adjustable accuracy are developed and it is shown that with the advantage of controlled sampling, the proposed methods are very flexible and efficient, and can in general lead to results of a very high degree of accuracy.
Abstract: In this paper, we devise efficient algorithms for mining association rules with adjustable accuracy. It is noted that several applications require mining the transaction data to capture the customer behavior frequently. In those applications, the efficiency of data mining could be a more important factor than the requirement for complete accuracy of the mining results. Allowing imprecise results can significantly improve the data mining efficiency. In this paper, two methods for mining association rules with adjustable accuracy are developed. By employing the concept of sampling, both methods obtain some essential knowledge from a sampled subset first, and in light of that knowledge, perform efficient association rule mining on the entire database. A technique of relaxing the support factor based on the sampling size is devised to achieve the desired level of accuracy. These two methods differ from each other in their ways of utilizing the sampled data. Performance of these two methods is comparatively analyzed. As shown by our experimental results, the relaxation factor, as well as the sample size, can be properly adjusted so as to improve the result accuracy while minimizing the corresponding execution time, thereby allowing us to effectively achieve a design trade-off between accuracy and efficiency with two control parameters. It is shown that with the advantage of controlled sampling, the proposed methods are very flexible and efficient, and can in general lead to results of a very high degree of accuracy.
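A toy rendering of the idea: mine a small sample at a support threshold relaxed below the target, then verify the surviving candidates against the full database. Only item pairs are considered here, and the fixed relaxation factor is an arbitrary stand-in for the paper's sample-size-based rule.

```python
# Sampled candidate generation with a relaxed support threshold, followed by
# exact verification on the whole database. db is a list of transaction sets.
from itertools import combinations

def support(db, itemset):
    s = set(itemset)
    return sum(1 for t in db if s <= t) / len(db)

def mine_pairs(db, min_sup, sample_frac=0.1, relax=0.8):
    sample = db[: max(1, int(len(db) * sample_frac))]  # stand-in for random sampling
    relaxed = relax * min_sup                          # lower bar on the sample
    items = sorted({i for t in sample for i in t})
    cand = [c for c in combinations(items, 2) if support(sample, c) >= relaxed]
    return [c for c in cand if support(db, c) >= min_sup]  # verify on full db
```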

Proceedings ArticleDOI
21 Apr 1997
TL;DR: This paper proposes the use of motion analysis (MA) to adapt to scene content and can achieve from 2% to 13.9% savings in bits while maintaining similar quality.
Abstract: The MPEG video compression standard effectively exploits spatial, temporal, and coding redundancies in the algorithm. In its generic form, however, only a minimal amount of scene adaptation is performed. Video can be further compressed by taking advantage of scenes where the temporal statistics allow larger inter-reference frame distances. This paper proposes the use of motion analysis (MA) to adapt to scene content. The actual picture type (I, P, or B) decision is made by examining the accumulation of motion measurements since the last reference frame was labeled. Depending on the video content, this proposed algorithm can achieve from 2% to 13.9% savings in bits while maintaining similar quality.
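The decision rule can be sketched as accumulating a per-frame motion measurement since the last reference frame and promoting a frame to a P (reference) picture once the accumulation crosses a threshold, coding it as B otherwise. The thresholds and the periodic I insertion below are invented for illustration.

```python
# Motion-accumulation picture-type labeling (illustrative constants only).

def label_pictures(motion_per_frame, gop_len=15, motion_limit=40.0):
    labels, acc, since_i = [], 0.0, 0
    for m in motion_per_frame:
        since_i += 1
        if since_i >= gop_len:            # periodic intra picture
            labels.append("I")
            acc, since_i = 0.0, 0
        elif (acc := acc + m) >= motion_limit:
            labels.append("P")            # enough motion: new reference frame
            acc = 0.0
        else:
            labels.append("B")            # low motion: widen reference spacing
    return labels
```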

Journal ArticleDOI
06 Feb 1997
TL;DR: I.McIC is a video encoder for storage applications where higher bit rates can be tolerated but the video signals to be encoded might not be as clean as typical studio standards; therefore, noise reduction is an integral part of I.McIC.

Abstract: MPEG2 encoding is done mainly by distributors and publishers using professional equipment too expensive for the consumer market. I.McIC is a video encoder for this market, in particular for storage applications where higher bit rates can be tolerated (5-15 Mb/s) compared with bit rates for transmission (1.5-8 Mb/s). I.McIC operates in MPEG ML@SP mode and offers good video quality at 5 Mb/s and excellent quality at 10 Mb/s. For consumer storage applications, the video signals to be encoded might not be as clean as typical studio standards. Therefore, noise reduction is an integral part of I.McIC. I.McIC can share 16 Mb DRAM with an MPEG2 video decoder, organized as 4 × 4 Mb devices with 60 ns access. It can handle both 50 Hz and 60 Hz video sources. To interface to a video source, I.McIC uses a line-locked clock generated by the A/D converter and running at 27 MHz.

Proceedings ArticleDOI
10 Jan 1997
TL;DR: In this paper, the authors proposed a rate control algorithm based on dynamic programming combined with automatic repeat request (ARQ) as the error control mechanism for video transmission over wireless links, which can achieve better video quality with the available bandwidth and recover from the errors due to channel degradation.
Abstract: Video transmission over wireless links is an emerging application which involves a time-varying channel. Compared to other transmission media, wireless links suffer from limited bandwidth, and are more likely to see their performance degrade due to multipath fading. Therefore error control mechanisms, which can achieve better video quality with the available bandwidth and recover from the errors due to the channel degradation, are very important in wireless video transmission systems. Many of the proposed wireless communications systems are likely to be two-way, so that a return channel can convey information to the transmitter about the channel state. Recent research has considered ways of improving the transmission reliability by making use of the feedback channel for 'closed loop' error control, including various forms of retransmission. In this paper we propose a rate control algorithm based on dynamic programming combined with automatic repeat request (ARQ) as the error control mechanism. We formalize the constraints imposed by the real-time characteristics of video. We show how, when an appropriate model for the channel is available, the overall robustness of the system can be improved through rate control at the source using the channel state information conveyed by the ARQ acknowledgement.

Patent
03 Nov 1997
TL;DR: In this article, a metric whose value for a given block position is indicative of the relative importance of the given block positions in determining decoded video quality is used to determine one or more of the block positions which are more likely than other block positions to improve decoding performance if refreshed.
Abstract: Error recovery is improved in a video transmission system by maintaining, for block positions in a frame, a metric whose value for a given block position is indicative of the relative importance of the given block position in determining decoded video quality. The blocks may be macroblocks configured in accordance with a motion-compensated video compression technique such as MPEG-2. The metric values are used to determine one or more of the block positions which are more likely than other block positions to improve decoded video quality if refreshed. The video signal is then refreshed by transmitting intra-coded blocks for the determined block positions in a given frame. The metric may be based on a count of the number of times a block from a given block position has been coded and transmitted since it was last refreshed, such that the block positions having the highest counts are refreshed. The metric-based selection of refresh blocks ensures that the refresh bits are allocated to the blocks most likely to improve decoded video quality.
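The counting version of the metric lends itself to a small sketch: per block position, count how many times the block has been coded since its last intra refresh, and refresh the positions with the highest counts. Class and parameter names are hypothetical.

```python
# Count-based refresh selection, per the abstract's example metric.
import heapq

class RefreshSelector:
    def __init__(self, num_positions):
        self.count = [0] * num_positions

    def frame_coded(self):
        self.count = [c + 1 for c in self.count]  # each position coded once more

    def pick_refresh(self, k):
        """Return the k positions most overdue for an intra refresh."""
        chosen = heapq.nlargest(k, range(len(self.count)),
                                key=self.count.__getitem__)
        for p in chosen:
            self.count[p] = 0                     # refreshed: restart its count
        return chosen
```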

Journal ArticleDOI
TL;DR: The MPEG-4 video subjective tests were successful, providing the MPEG community with critical information to guide in the selection of technologies for inclusion in the video part of the MPEG- 4 standard.
Abstract: A new audio-visual coding standard, MPEG-4, is currently under development. MPEG-4 will address not only compression, but also completely new audio-video coding functionalities related to content-based interactivity and universal access. As part of the MPEG-4 standardization process, in November, 1995 assessments were performed on technologies proposed for incorporation in the standard. These assessments included formal subjective tests, as well as expert panel evaluations. This paper describes the MPEG-4 video formal subjective tests. Since MPEG-4 addresses new coding functionalities, and also operates at bit-rates lower than ever subjectively tested before on a large scale, standard ITU test methods were not directly applicable. These methods had to be adapted, and even new test methods devised, for the MPEG-4 video subjective tests. We describe here the test methods used in the MPEG-4 video subjective tests, how the tests were carried out, and how the test results were interpreted. We also evaluate the successes and shortcomings of the MPEG-4 video subjective tests, and suggest possible improvements for future tests. The MPEG-4 video subjective tests were successful, providing the MPEG community with critical information to guide in the selection of technologies for inclusion in the video part of the MPEG-4 standard.

Book ChapterDOI
01 Jun 1997
TL;DR: This work proposes a methodology for designing video indexing schemes which use low level machine derivable indices to map into the set of application specific desired video indices.
Abstract: Indexing video data is essential for providing content based access. Indexing has typically been viewed either from a manual annotation perspective or from an image sequence processing perspective. This work proposes a methodology for designing video indexing schemes which use low level machine derivable indices to map into the set of application specific desired video indices. The indexing procedure uses image sequence processing and application requirements analysis to arrive at the low level and desired indices. The mapping is created based on the domain constraints. A mapping efficacy measure is presented. Experimental results of indexing video using image motion features are presented.

Proceedings ArticleDOI
02 Dec 1997
TL;DR: An extremely efficient MPEG-2 video transcoding method is proposed, which has a low buffer requirement, results in low delay, and does not need computationally intensive motion estimation.

Abstract: An extremely efficient MPEG-2 video transcoding method is proposed, which has a low buffer requirement and results in low delay. Most importantly, the proposed method does not need computationally intensive motion estimation. Simulation results demonstrate that the proposed approach results in very consistent video quality and maintains the buffer level effectively.

Patent
31 Mar 1997
TL;DR: In this paper, a convergence device for computer and television is presented, which includes a computer, a display monitor for displaying images in both the computer mode and the television mode, and multiple video inputs for receiving various types of video signals, each being selectable during operation in television mode.
Abstract: A computer convergence device, operable in a computer mode and, for example, a television mode, includes a computer, a display monitor for displaying images in both the computer mode and the television mode, and multiple video inputs for receiving various types of video signals, each being selectable during operation in television mode. A controller device is coupled to the computer for independently controlling and storing user selected video geometry settings for both the computer mode and the television mode, and further for independently controlling and storing user selected video quality settings for the computer mode, and each of the various video inputs.


Proceedings ArticleDOI
M. Balakrishnan1, R. Cohen
26 Oct 1997
TL;DR: An algorithm is presented for increasing the overall video quality of multiple variable rate video encoders multiplexed onto a single constant bit-rate channel, which enables an encoder that is processing complex video to use more of the channel bandwidth than an encoder that is compressing less complex video.

Abstract: An algorithm is presented for increasing the overall video quality of multiple variable rate video encoders multiplexed onto a single constant bit-rate channel. This enables an encoder that is processing complex video to use more of the channel bandwidth than an encoder that is compressing less complex video. The encoder rates are continually adapted. A rate controller determines the channel rate for each encoder as well as the target rate allocated to individual pictures encoded by the encoders. The target rates for individual pictures are computed in order to maintain constant quality across encoders, while the encoder channel rates are derived so as to avoid encoder and decoder buffer underflow and overflow.
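The allocation principle can be illustrated by splitting the shared channel in proportion to each encoder's current complexity. The complexity proxy and the per-encoder floor below are assumptions; the actual controller additionally tracks encoder and decoder buffer levels to prevent underflow and overflow.

```python
# Complexity-proportional division of one CBR channel among encoders.
# floor_frac keeps every encoder alive; both constants are invented.

def share_channel(complexities, channel_rate, floor_frac=0.05):
    total = sum(complexities)
    floor = floor_frac * channel_rate
    raw = [max(floor, channel_rate * c / total) for c in complexities]
    scale = channel_rate / sum(raw)          # renormalize after clamping
    return [r * scale for r in raw]

print(share_channel([8.0, 1.0, 1.0], channel_rate=10e6))  # complex feed gets most
```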

Proceedings ArticleDOI
26 Oct 1997
TL;DR: This paper proposes a rate-distortion based control scheme where pre-filtering and block classification are used to achieve higher quality at low rates, and shows significant reductions of the blockiness usually encountered at low rates when compared to schemes such as TM5.
Abstract: In digital video coding, the rate control scheme is essential to regulate the output data rate and maintain the output quality. The control scheme defined in MPEG Test Model 5 provides a solution with very light computational overhead, but the results are not guaranteed to be good for all video sources and channel rates without resorting to a manual "tweaking" of the control parameters, with many trial-and-error encoding tests. In this paper, we propose a rate-distortion based control scheme where we use pre-filtering and block classification to achieve higher quality at low rates. In particular, we show how to efficiently include the pre-filtering parameters as part of the rate control optimization process. Our coded sequences are compatible with standard MPEG decoders and our method is suitable for channel rates lower than those normally used for MPEG-1 sequences (around 1 Mbps), but which may be more appropriate for Internet applications. In addition, our algorithm is generic and can readily be extended for H.263/MPEG-4 encoders. Our results show significant reductions of the blockiness usually encountered at low rates, when compared to schemes such as TM5.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: A hybrid ARQ with a selective combining scheme using rate-compatible punctured convolutional (RCPC) codes on Rayleigh fading channels is proposed and it is shown that the scheme performs better than the generalized type-II ARQ for bursty error channels.
Abstract: We investigate hybrid ARQ schemes for low-bit-rate video transmission over wireless channels. We propose and analyze a hybrid ARQ with a selective combining scheme using rate-compatible punctured convolutional (RCPC) codes on Rayleigh fading channels. It is shown that our scheme performs better than the generalized type-II ARQ for bursty error channels. Simulations of a real-time video TDMA transmission system are also performed. A better video quality can be obtained by our proposed scheme with a bounded delay. Analytical results of the throughput and packet error rate are compared to the simulation results. Our analysis based on a finite-state Markov channel model is shown to give very good agreement with simulations.
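The finite-state Markov channel underlying the analysis can be illustrated with its two-state (Gilbert-Elliott) special case, which reproduces the bursty losses characteristic of Rayleigh fading. All probabilities below are illustrative, not the paper's fitted parameters.

```python
# Two-state Markov (Gilbert-Elliott) packet-loss generator: a "good" and a
# "bad" state with different loss rates produce bursty error patterns.
import random

def gilbert_elliott(n, p_g2b=0.05, p_b2g=0.3, loss_good=0.01, loss_bad=0.4):
    """Return n packet-loss booleans with bursty structure."""
    bad, losses = False, []
    for _ in range(n):
        # State transition: enter "bad" with p_g2b, leave it with p_b2g.
        bad = (random.random() < p_g2b) if not bad else (random.random() >= p_b2g)
        losses.append(random.random() < (loss_bad if bad else loss_good))
    return losses
```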

01 Feb 1997
TL;DR: Evaluated the incremental validity of prior video game experience over that of general aptitude as a predictor of work sample test scores and suggested that those persons with video game experience were more efficient at hand-offs and routing aircraft.

Abstract: The FAA is currently using the Air Traffic Scenario Test (ATST) as a major portion of its selection process. Because the ATST is a PC-based application with a strong resemblance to a video game, concern has been raised that prior video game experience might have a moderating effect on scores. Much of the previous research in this area is associated with topics such as the moderating effects of prior computer experience on scores earned on computerized versions of traditional achievement or power tests, and the effects of practice on video games on individual difference tests for constructs such as spatial ability. The effects of computer or video game experience on work sample scores have not been systematically investigated. The purpose of this study was to evaluate the incremental validity of prior video game experience over that of general aptitude as a predictor of work sample test scores. The Computer Use Survey was administered to 404 air traffic control students who entered the FAA ATCS Nonradar Screen. The resultant responses from this survey related to video games were summed and averaged to create the predictor (VIDEO). Three criterion measures derived from the ATST (ATSAFE, ARVDELAY, HNDDELAY) were regressed on the cognitive aptitude measure that serves as the initial selection screening test and the predictor (VIDEO). Self-reported experience on video games was found to be significantly related to ARVDELAY and HNDDELAY, accounting for an additional 3.6% of the variance in ARVDELAY and an additional 9% of the variance in HNDDELAY. The results suggested that those persons with video game experience were more efficient at hand-offs and routing aircraft. Future research is recommended to investigate the effect of prior video game experience on learning curves and strategies used in the work sample test.
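The incremental-validity computation amounts to comparing R-squared between nested regressions: criterion on aptitude alone versus aptitude plus video-game experience. A sketch with hypothetical arrays standing in for the study's variables:

```python
# Incremental R^2 of a second predictor over a baseline (hypothetical data).
import numpy as np

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])     # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def incremental_r2(aptitude, video, criterion):
    base = r_squared(aptitude[:, None], criterion)
    full = r_squared(np.column_stack([aptitude, video]), criterion)
    return full - base   # e.g. ~0.09 was reported for HNDDELAY in the study
```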

Proceedings ArticleDOI
20 Aug 1997
TL;DR: This work proposes a dynamic algorithm allowing one to select the most suitable error-loss concealment scheme by taking into account the characteristics of the video sequence, and shows the feasibility of developing algorithms that are able to dynamically select the best error-loss concealment schemes.
Abstract: In the process of transmission of digital-encoded video bitstreams over an ATM network, cells are inevitably exposed to delays and losses due to the dynamic resource allocation mechanisms used in these networks. Error-loss concealment schemes have been proposed to limit the impact of such impairments on the video quality. The results obtained by these algorithms very much depend on the characteristics of the video sequences being transmitted. We propose a dynamic algorithm allowing one to select the most suitable error-loss concealment scheme by taking into account the characteristics of the video sequence. Our results show the feasibility of developing algorithms that are able to dynamically select the best error-loss concealment schemes as well as the improvements in the image quality in the presence of losses.

Proceedings ArticleDOI
03 Jun 1997
TL;DR: An adaptive mechanism for video sequence analysis in the transform domain is proposed, based on which a video scene can be consistently categorised into identifiable classes and methods for measuring motion complexity and activity levels in the scene.
Abstract: An adaptive mechanism for video sequence analysis in the transform domain is proposed. Methods for measuring motion complexity and activity levels in the scene are presented, based on which a video scene can be consistently categorised into identifiable classes. Further, the video quality, as measured by the mean square error or the signal to noise ratio is related to certain parameters used in video analysis. Adaptability in the analysis is realised by tailoring the parameters of the video analysis algorithm to the specific characteristics of the video scene, as embodied in the video scene class and the video quality. Preliminary experiments show the feasibility of the proposal, and the improved performance that can be achieved by adaptively tuning the analysis parameters to the characteristics of the particular scene under consideration.

Proceedings ArticleDOI
03 Jun 1997
TL;DR: It is shown that it is possible to measure quality of video sequences continuously in a consistent way and the results of continuous assessment give the possibility to select relevant material for further analysis, for instance by standard ITU/R methods.
Abstract: For the optimization of digital imaging systems it is crucial to known how parameter settings affect the perceptual quality of displayed images. This calls for valid techniques for assessing image quality. Here, we studied continuous assessment of the instantaneous quality impression of long image sequences. Initially, we concentrated on the measuring method. Subjects were instructed to indicate quality by mooing a slider along a graphical scale. With a sequence consisting of time-variably blurred stills the temporal characteristics of continuous scaling could be separated from the relation between blur and quality impression. The temporal behavior can be explained by a causal linear time-filter. Subsequently, we extended the method to real video. In order to check the validity of continuous scaling, perceived quality of the video at any moment in time was measured by partitioning the video in short fragments and evaluating the quality of each fragment separately. The image material was MPEG-2 coded at 2 Mbit/s. The relation between the time-quality curves from the continuous assessment and the instantaneous ratings of the fragments is described by the same time-filter as found previously. This filter indicates a delay of 1 second, and suggests that subjects can monitor image quality variations almost instantaneously. With these experiments, we have shown that it is possible to measure quality of video sequences continuously in a consistent way. As confirmed in a third experiment, the results of continuous assessment give the possibility to select relevant material for further analysis, for instance by standard ITU/R methods.