Topic

Video quality

About: Video quality is a research topic. Over its lifetime, 13,143 publications have been published within this topic, receiving 178,307 citations.


Papers
Journal ArticleDOI
TL;DR: A network-aware and source-aware video streaming system is proposed to support interactive multiuser communications within single-cell and multicell IEEE 802.11 networks and can provide more uniform video quality for all users and lower quality fluctuation for each received video sequence.
Abstract: In this paper, a network-aware and source-aware video streaming system is proposed to support interactive multiuser communications within single-cell and multicell IEEE 802.11 networks. Unlike traditional streaming video services, the strict delay constraints of an interactive video streaming system pose more challenges. These challenges include the heterogeneity of uplink and downlink channel conditions experienced by different users, the multiuser allocation of limited radio resources, the incorporation of cross-layer design, and the diversity of content complexities exhibited by different video sequences. With awareness of video content and network resources, the proposed system integrates a cross-layer error protection mechanism and performs dynamic resource allocation across multiple users. We formulate the proposed system as the problem of minimizing the maximal end-to-end expected distortion received by any user, subject to maximal transmission power and delay constraints. To reduce the high dimensionality of the search space, fast multiuser algorithms are proposed to find near-optimal solutions. Compared to a strategy that does not dynamically and jointly allocate bandwidth for uplinks and downlinks, the proposed framework gains 2.18 to 7.95 dB in the average received PSNR of all users and 3.82 to 11.50 dB in the lowest received PSNR among all users. Furthermore, the proposed scheme provides more uniform video quality across users and lower quality fluctuation for each received video sequence.
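The min-max formulation above can be illustrated with a small greedy allocation sketch. The distortion curves, the unit-based resource model, and the function names below are invented for illustration and are not the paper's fast multiuser algorithm; they only show how handing each additional resource unit to the currently worst-off user drives down the maximal expected distortion.

```python
# Illustrative min-max resource allocation (hypothetical model, not the
# paper's algorithm): each iteration gives the next unit of the shared
# radio budget to the user whose expected distortion is currently worst,
# greedily lowering the maximum distortion among all users.

def minmax_allocate(distortion_fns, total_units):
    """distortion_fns: one callable per user, d_i(units) -> expected distortion,
    assumed monotonically decreasing in the allocated units."""
    alloc = [0] * len(distortion_fns)
    for _ in range(total_units):
        worst = max(range(len(alloc)),
                    key=lambda i: distortion_fns[i](alloc[i]))
        alloc[worst] += 1
    return alloc

# Toy example: three users with different (made-up) rate-distortion curves.
curves = [lambda u, a=a: a / (1.0 + u) for a in (100.0, 60.0, 150.0)]
print(minmax_allocate(curves, 30))   # allocation of 30 units, skewed toward high-distortion users
```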

45 citations

Journal ArticleDOI
01 Feb 2009
TL;DR: A structural information-based image quality assessment algorithm is proposed in which LU factorization represents the structural information of an image; the resulting metric can effectively replace the peak signal-to-noise ratio or the mean square error.
Abstract: The goal of objective image quality assessment is to quantitatively measure the quality of an arbitrary image. An objective image quality measure is desirable if it is close to subjective image quality assessments such as the mean opinion score. Image quality assessment algorithms are generally classified into two methodologies: perceptual and structural information-based. This paper proposes a structural information-based image quality assessment algorithm in which LU factorization is used to represent the structural information of an image. The proposed algorithm performs LU factorization of both the reference and distorted images, from which a distortion map is computed for measuring the quality of the distorted image. Finally, the proposed image quality metric is computed from the two-dimensional distortion map. Experimental results with the Laboratory for Image and Video Engineering (LIVE) database images show the efficiency of the proposed method, calibrated by linear and logistic regressions, in terms of the Pearson correlation coefficient and root mean square error. In commercial systems, the proposed algorithm can be used for quality assessment of mobile contents and video coding, effectively replacing the peak signal-to-noise ratio or the mean square error.
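As a rough illustration of the idea (not the paper's exact metric), the sketch below factors a reference and a distorted grayscale image with LU decomposition and compares their upper-triangular factors element-wise to form a crude distortion map; the function name and the mean pooling step are assumptions made here.

```python
# Sketch of an LU-factorization-based structural comparison. This only
# illustrates the general approach; the distortion map and pooling used
# by the paper may be defined differently.

import numpy as np
from scipy.linalg import lu

def lu_distortion(ref, dist):
    """ref, dist: 2-D float arrays (grayscale images) of identical shape."""
    _, _, u_ref = lu(ref)            # upper-triangular factor of the reference
    _, _, u_dist = lu(dist)          # upper-triangular factor of the distorted image
    distortion_map = np.abs(u_ref - u_dist)   # element-wise structural difference
    return float(distortion_map.mean())       # lower value = more similar

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = ref + 0.05 * rng.standard_normal(ref.shape)
print(lu_distortion(ref, ref))     # 0.0 for identical images
print(lu_distortion(ref, noisy))   # larger for the distorted image
```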

45 citations

Journal ArticleDOI
TL;DR: This work proposes a new H.263+ rate control scheme which supports variable bit rate (VBR) channels by adjusting the encoding frame rate and quantization parameter, and develops a fast algorithm based on the inherent motion information within a sliding window in the underlying video.
Abstract: Most existing H.263+ rate control algorithms, e.g., the one adopted in the test model near-term version 8 (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and transmission over a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports variable bit rate (VBR) channels through the adjustment of the encoding frame rate and quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window in the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
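The joint frame-rate and quantizer adjustment can be sketched roughly as below; the thresholds, the motion score, the class name, and the bandwidth check are invented for illustration and are not taken from the paper's algorithm.

```python
# Rough sketch of joint frame-rate and quantization-parameter control driven
# by motion inside a sliding window and by the currently available bandwidth.
# All thresholds and step sizes here are illustrative assumptions.

from collections import deque

class SimpleH263RateControl:
    def __init__(self, window=10, fps=30, qp=10):
        self.motion = deque(maxlen=window)   # sliding window of motion scores
        self.fps, self.qp = fps, qp

    def update(self, motion_score, available_kbps):
        self.motion.append(motion_score)
        avg_motion = sum(self.motion) / len(self.motion)
        if avg_motion > 0.5:
            # High motion: keep temporal resolution, coarsen quantization.
            self.fps, self.qp = 30, min(31, self.qp + 2)
        else:
            # Low motion: drop frames, spend the bits on spatial quality.
            self.fps, self.qp = 15, max(1, self.qp - 2)
        if available_kbps < 128:
            # Channel bandwidth dropped: coarsen quantization further.
            self.qp = min(31, self.qp + 4)
        return self.fps, self.qp

rc = SimpleH263RateControl()
print(rc.update(motion_score=0.8, available_kbps=256))   # (30, 12)
```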

45 citations

Journal ArticleDOI
TL;DR: This paper studies the impact of delay, jitter, packet loss, and bandwidth on Quality of Experience (QoE) and evaluates the relationship between content-related parameters and QoE for different levels of impairments.
Abstract: The analysis of the impact of video content and transmission impairments on Quality of Experience (QoE) is a relevant topic for the robust design and adaptation of multimedia infrastructures, services, and applications. The goal of this paper is to study the impact of video content on QoE for different levels of impairments. In more detail, this contribution aims at i) the study of the impact of delay, jitter, packet loss, and bandwidth on QoE, ii) the analysis of the impact of video content on QoE, and iii) the evaluation of the relationship between content-related parameters (spatial-temporal perceptual information, motion, and data rate) and the QoE for different levels of impairments.
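Spatial-temporal perceptual information of the kind mentioned above is commonly computed along the lines of the ITU-T P.910 SI/TI indicators; the sketch below follows that general recipe, though the exact content descriptors used by the authors may differ.

```python
# Sketch of spatial information (SI) and temporal information (TI) indicators
# in the spirit of ITU-T P.910; the paper's content descriptors may be
# defined differently.

import numpy as np
from scipy import ndimage

def spatial_info(frame):
    """SI of one luminance frame: std-dev of its Sobel gradient magnitude."""
    gx = ndimage.sobel(frame, axis=0, mode="reflect")
    gy = ndimage.sobel(frame, axis=1, mode="reflect")
    return float(np.hypot(gx, gy).std())

def temporal_info(prev_frame, frame):
    """TI between consecutive luminance frames: std-dev of their difference."""
    return float((frame - prev_frame).std())

def sequence_si_ti(frames):
    # Per-sequence values are the maxima over all frames (as in P.910).
    si = max(spatial_info(f) for f in frames)
    ti = max(temporal_info(a, b) for a, b in zip(frames, frames[1:]))
    return si, ti
```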

45 citations

Proceedings ArticleDOI
TL;DR: The results show that video quality is sensitive to how layering is accomplished, that there is an optimum layering that maximizes quality for a given network condition, and that, contrary to customary belief, dropping data in B frames prior to dropping data in P or I frames is a poor layering technique.
Abstract: The current Internet is not well suited for the transmission of high quality video such as MPEG-2 because of severe quality degradation during network congestion episodes. One possible solution is the combination of layered video coding with the Differentiated Services (DiffServ) architecture; different video layers are mapped into different priority levels, and packets with different priorities receive different dropping treatment in the network. It is expected that with layering and priority dropping, graceful degradation of video quality will be experienced during congestion episodes. We consider various layering mechanisms defined in the MPEG-2 standards, namely temporal scalability, data partitioning (DP), and signal-to-noise ratio (SNR) scalability. The main issue in this paper is how layers should be created to maximize perceived video quality over a given range of network conditions. Key to our study is the use of real-life video sequences and a video quality measure consisting of a perceptual distortion metric based on the human visual system (HVS). Our results show that video quality is sensitive to how layering is accomplished, and that there is an optimum layering that maximizes the quality for a given network condition. Our results also show that layering can achieve higher network loading for a given minimum quality target than non-layered video, and can achieve graceful degradation over a wider range of network conditions. We have also seen that the wider the range of network conditions, the higher the number of layers required to remain at the highest possible quality level for each network condition. In particular, we demonstrate how three or four layers achieve better results than two layers; however, additional layers beyond four provide marginal improvement. Therefore, from a practical point of view, three or four layers are sufficient to attain most of the benefits of layering. We compare the various scalability techniques in terms of complexity and video quality. Temporal scalability, which restricts the layering to be done at frame boundaries, is the simplest to implement and introduces no overhead, but performs poorly compared to data partitioning, which allows the grouping of coefficients into layers independent of the frames they belong to. This shows that, contrary to customary belief, dropping data in B frames prior to dropping data in P or I frames is a poor layering technique. DP is much simpler to implement and introduces significantly lower overhead than SNR scalability. However, SNR scalability provides higher quality than DP when network conditions are particularly poor.
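A minimal sketch of the layer-to-priority mapping and priority dropping described above follows; the layer names, the byte-budget congestion model, and the function are assumptions made for illustration, not the experimental setup of the paper.

```python
# Illustration of priority dropping for layered video over DiffServ: packets
# from higher (enhancement) layers map to higher drop precedence, so under
# congestion they are discarded before base-layer packets. Layer names and
# the capacity model are assumptions made for this sketch.

LAYER_PRIORITY = {"base": 0, "enh1": 1, "enh2": 2}   # 0 = dropped last

def drop_under_congestion(packets, capacity_bytes):
    """packets: list of (layer, size_bytes); keep the most important packets
    that fit into the available capacity, dropping low-priority ones first."""
    kept, used = [], 0
    for layer, size in sorted(packets, key=lambda p: LAYER_PRIORITY[p[0]]):
        if used + size <= capacity_bytes:
            kept.append((layer, size))
            used += size
    return kept

pkts = [("base", 800), ("enh2", 800), ("enh1", 800), ("base", 800)]
print(drop_under_congestion(pkts, 2400))
# -> the base and enh1 packets survive; the enh2 packet is dropped first
```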

45 citations


Network Information
Related Topics (5)

Network packet: 159.7K papers, 2.2M citations (87% related)
Feature extraction: 111.8K papers, 2.1M citations (87% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance
Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    139
2022    336
2021    399
2020    535
2019    609
2018    673