Topic
Video quality
About: Video quality is a research topic. Over its lifetime, 13,143 publications have been published within this topic, receiving 178,307 citations.
Papers published on a yearly basis
Papers
TL;DR: Describes a Video SAR (Synthetic Aperture Radar) mode that provides a persistent view of a scene centered at the Motion Compensation Point (MCP), along with the generation of synthetic targets in linear motion at both constant velocity and constant acceleration.
Abstract: This paper details a Video SAR (Synthetic Aperture Radar) mode that provides a persistent view of a scene centered at
the Motion Compensation Point (MCP). The radar platform follows a circular flight path. An objective is to form a
sequence of SAR images while observing dynamic scene changes at a selectable video frame rate. A formulation of
backprojection meets this objective. Modified backprojection equations take into account changes in the grazing angle
or squint angle that result from non-ideal flight paths.
The algorithm forms a new video frame relying upon much of the signal processing performed in prior frames. The
method described applies an appropriate azimuth window to each video frame for window sidelobe rejection.
A Cardinal Direction Up (CDU) coordinate frame forms images with the top of the image oriented along a given
cardinal direction for all video frames. Using this coordinate frame helps characterize a moving target's response.
Generation of synthetic targets with linear motion including both constant velocity and constant acceleration is
described. The synthetic target video imagery demonstrates dynamic SAR imagery with expected moving target
responses. The paper presents 2011 flight data collected by General Atomics Aeronautical Systems, Inc. (GA-ASI)
implementing the video SAR mode. The flight data demonstrates good video quality, showing moving vehicles, and the
flight imagery confirms the real-time capability of the mode. The mode maintains a circular shift register of
subapertures, and the radar employs a Graphics Processing Unit (GPU) to implement the algorithm.
55 citations
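The circular shift register of subapertures mentioned in the abstract can be sketched as follows. This is a minimal illustration, not GA-ASI's implementation: it assumes each subaperture has already been backprojected onto a common MCP-centered grid, so forming a new video frame only requires computing the newest subaperture image and coherently summing the buffer.

```python
from collections import deque

def video_sar_frames(subaperture_images, n_subaps, step=1):
    """Form video frames from a sliding window of backprojected
    subaperture images held in a circular shift register.

    Each incoming image is assumed to be backprojected onto the same
    MCP-centered grid, so a frame is the coherent sum of the buffer;
    only the newest subaperture is computed per frame, the rest reused.
    """
    buf = deque(maxlen=n_subaps)  # circular shift register
    for i, sub in enumerate(subaperture_images):
        buf.append(sub)           # oldest subaperture drops out automatically
        # emit a frame every `step` new subapertures once the buffer is full
        if len(buf) == n_subaps and (i - n_subaps + 1) % step == 0:
            yield sum(buf)        # coherent sum -> one video frame
```

With complex-valued image arrays, `sum(buf)` performs the coherent summation; the per-frame azimuth window the paper describes could be applied by weighting the buffer entries before summing.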
24 Feb 2004
TL;DR: In this paper, a fine granularity scalable (FGS) codec has an encoder and a decoder configurable in three prediction modes: coarse prediction, fine prediction, and mixed prediction.
Abstract: An architecture of a fine granularity scalable (FGS) codec has an encoder and a decoder configurable in three prediction modes. The coarse prediction loop in the base layer of the encoder has a switch for selecting either coarse prediction output or fine prediction output in the encoder. The fine prediction loop in the enhancement layer of the encoder also has a switch for selecting either coarse prediction output or fine prediction output. Two-pass encoding is used in the encoder. The first pass extracts coding parameters and classifies macroblocks of a video frame into three groups, each being assigned all-coarse prediction mode, all-fine prediction mode, or mixed prediction mode. The second pass uses the assigned modes to encode the macroblocks. A rate adaptation algorithm is provided to truncate the enhancement bit-planes for low, medium, and high bit rates and to allocate bits efficiently to achieve higher video quality.
55 citations
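The rate adaptation step, truncating enhancement bit-planes to hit a target rate, can be sketched as follows. This is a generic illustration of FGS bit-plane truncation under an assumed representation (a list of per-plane encoded sizes, most significant first), not the patent's exact algorithm.

```python
def truncate_bitplanes(plane_sizes_bits, bit_budget):
    """Keep enhancement bit-planes from most to least significant until
    the bit budget for the target rate is exhausted; drop the rest.

    Returns the indices of the planes kept and the bits consumed.
    """
    kept, used = [], 0
    for idx, size in enumerate(plane_sizes_bits):
        if used + size > bit_budget:
            break  # remaining (less significant) planes are truncated
        kept.append(idx)
        used += size
    return kept, used
```

At a low bit rate only the most significant plane survives; at a high rate all planes fit, which is what gives FGS its fine-grained quality scaling.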
TL;DR: The proposed solution aims to deliver high visual quality, in real time, around the users' fixation points while lowering the quality everywhere else, substantially reducing the overall bandwidth requirements for supporting VR video experiences.
Abstract: This paper presents a novel approach to content delivery for video streaming services. It exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution aims to deliver high visual quality, in real time, around the users' fixation points while lowering the quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The prerequisites to achieve these results are: (1) mechanisms that can cope with different degrees of latency in the system and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate. A novel codec configuration, capable of supporting near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits in-built properties of HEVC encoders; while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enabling gaze-aware streaming and its reduction in bandwidth. A testbed implementing gaze-aware streaming, together with a prototype HMD with an in-built eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationships between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to the users, as compared to conventional solutions.
55 citations
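One way to picture the region-wise quality adaptation described above is a per-tile quantization-parameter map driven by the tracked gaze point: low QP (high quality) near the fixation, higher QP elsewhere. The fovea radius and QP falloff below are hypothetical parameters for illustration, not the paper's HEVC configuration.

```python
import math

def tile_qp_map(tile_centers_deg, gaze_deg, base_qp=22, max_qp=42,
                fovea_deg=5.0, qp_per_deg=2.0):
    """Assign each tile a quantization parameter: low QP (high quality)
    around the fixation point, rising with eccentricity up to max_qp.

    Tile centers and gaze are given in degrees of visual angle.
    """
    gx, gy = gaze_deg
    qps = []
    for tx, ty in tile_centers_deg:
        ecc = math.hypot(tx - gx, ty - gy)          # angular distance from gaze
        extra = max(0.0, ecc - fovea_deg) * qp_per_deg
        qps.append(min(max_qp, round(base_qp + extra)))
    return qps
```

Recomputing this map on every gaze sample, and having the encoder honor it per tile, is what makes the quality adaptation near-instantaneous without a large bitrate increase.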
TL;DR: A low-complexity adaptive motion-based unequal error protection (UEP) video coding and transmission system which efficiently combines three existing error-resilience techniques by exploiting knowledge of the source material as well as the channel operating conditions is proposed.
Abstract: In this work, we consider the delivery of digital video over future 3G wireless IP networks and we propose a low-complexity adaptive motion-based unequal error protection (UEP) video coding and transmission system which efficiently combines three existing error-resilience techniques by exploiting knowledge of the source material as well as the channel operating conditions. Given this information, the proposed system can adaptively adjust the operating parameters of the video source encoder and the forward error correction (FEC) channel encoder to maximize the delivered video quality based upon both application-layer video motion estimates and link-layer channel estimates. We demonstrate the efficacy of this approach using the ITU-T H.264 video source coder. The results indicate that a significant performance improvement can be achieved with enhanced resilience to inaccurate channel feedback information and with substantially reduced computational complexity compared to competing approaches.
55 citations
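The adaptive core of such a system, choosing FEC strength from application-layer motion estimates and link-layer channel estimates, can be sketched as a lookup. The thresholds and parity amounts below are hypothetical placeholders, not values from the paper.

```python
# (motion class, channel class) -> parity bytes of FEC per packet
# Hypothetical table: losing high-motion frames hurts quality more,
# so they get stronger protection, especially on a lossy channel.
FEC_TABLE = {
    ('low',  'good'): 2,  ('low',  'bad'): 8,
    ('high', 'good'): 4,  ('high', 'bad'): 16,
}

def select_fec(motion_estimate, loss_estimate,
               motion_thresh=4.0, loss_thresh=0.05):
    """Pick an FEC strength by classifying the application-layer motion
    estimate and the link-layer packet-loss estimate, then consulting
    the protection table."""
    motion = 'high' if motion_estimate > motion_thresh else 'low'
    channel = 'bad' if loss_estimate > loss_thresh else 'good'
    return FEC_TABLE[(motion, channel)]
```

Keeping the adaptation to a table lookup is one way to get the low computational complexity the abstract emphasizes.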
30 Oct 2000
TL;DR: Proposes a new approach to the problem of defining video similarity that takes into account the factors in human visual perception, and introduces a flexible comparison algorithm based on the multi-granularity structure of video.
Abstract: The most commonly used method for content-based video retrieval is query by example, but the lack of a precise definition of video similarity is a major obstacle to further research. This paper puts forward a new approach to this problem. First, it introduces a per-shot centroid feature vector in order to reduce the storage requirements of the video database. Second, taking into account the factors in human visual perception, it introduces a flexible comparison algorithm based on the multi-granularity structure of video. Third, after obtaining the set of similar videos, a new feedback method adjusts the weights in the video similarity model. In this way, the retrieval results can be greatly improved.
55 citations
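The centroid feature vector and a shot-level comparison can be sketched as follows. The cosine measure and best-match averaging are illustrative assumptions, since the paper's multi-granularity algorithm is not spelled out in this abstract.

```python
import numpy as np

def shot_centroid(frame_features):
    """Represent a shot by the centroid of its per-frame feature vectors,
    cutting storage from one vector per frame to one per shot."""
    return np.asarray(frame_features, dtype=float).mean(axis=0)

def video_similarity(shots_a, shots_b):
    """Shot-granularity comparison: match each shot centroid of video A
    to its most similar shot in video B (cosine similarity), then
    average the best-match scores."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return float(np.mean([max(cos(a, b) for b in shots_b) for a in shots_a]))
```

A multi-granularity scheme would apply the same idea at several levels (frame, shot, scene, whole video) and combine the scores with adjustable weights, which is where the paper's feedback-based weight adjustment would enter.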