
Inter frame

About: Inter frame is a research topic. Over the lifetime, 4154 publications have been published within this topic receiving 63549 citations.


Papers
Journal ArticleDOI
TL;DR: A Gaussian mixture distribution method is first used to eliminate the influence of moving vehicles and to build background images for the vehicle flow, combining the advantages of the background-difference algorithm with the inter-frame difference operator.
Abstract: Vehicle-flow detection and tracking from digital images are among the most important technologies in traffic monitoring systems. In this paper, a Gaussian mixture distribution method is first used to eliminate the influence of moving vehicles, and background images for the vehicle flow are then built. Combining the advantages of the background-difference algorithm with the inter-frame difference operator, the real-time background is segmented integrally and updated dynamically and accurately by matching the reconstructed image with the current background. To ensure the robustness of vehicle detection, 3x3 window templates are adopted to remove isolated noise spots in the vehicle-contour image, and the template structuring element is used for morphological filtering, yielding the erosion and dilation sets. To narrow the target search scope and improve the speed and precision of the algorithm, a Kalman filtering model is used to track fast-moving vehicles. Experimental results show that the method has good real-time performance and reliability.
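The combined background/inter-frame differencing and the 3x3 noise-removal step described above can be sketched as follows (a minimal NumPy illustration, not the paper's implementation; the threshold and neighbour-count values are assumptions):

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, background, thresh=25):
    """Combine background difference with inter-frame difference.

    A pixel is flagged as moving only if it differs from both the
    background model and the previous frame (logical AND), which
    suppresses ghosting that either method alone would leave behind.
    """
    bg_diff = np.abs(curr_frame.astype(int) - background.astype(int)) > thresh
    frame_diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > thresh
    return bg_diff & frame_diff

def remove_isolated_noise(mask):
    """Drop isolated foreground pixels using a 3x3 neighbourhood vote."""
    padded = np.pad(mask, 1)
    neighbours = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # keep only pixels with at least 2 foreground neighbours (assumed vote)
    return mask & (neighbours >= 2)
```

In practice the morphological erosion/dilation the paper mentions would follow the same pattern, applied with a structuring element over the cleaned mask.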

15 citations

Proceedings ArticleDOI
13 Mar 1996
TL;DR: In this paper, a video observation model is defined which incorporates temporal information via estimated interframe motion vectors, and the resulting ill-posed inverse problem is regularized through Bayesian maximum a posteriori (MAP) estimation, utilizing a discontinuity-preserving prior model for the spatial data.
Abstract: When an interlaced image sequence is viewed at the rate of sixty frames per second, the human visual system interpolates the data so that the missing fields are not noticeable. However, if frames are viewed individually, interlacing artifacts are quite prominent. This paper addresses the problem of deinterlacing image sequences for the purposes of analyzing video stills and generating high-resolution hardcopy of individual frames. Multiple interlaced frames are temporally integrated to estimate a single progressively-scanned still image, with motion compensation used between frames. A video observation model is defined which incorporates temporal information via estimated interframe motion vectors. The resulting ill-posed inverse problem is regularized through Bayesian maximum a posteriori (MAP) estimation, utilizing a discontinuity-preserving prior model for the spatial data. Progressively-scanned estimates computed from interlaced image sequences are shown at several spatial interpolation factors, since the multiframe Bayesian scan conversion algorithm is capable of simultaneously deinterlacing the data and enhancing spatial resolution. Problems encountered in the estimation of motion vectors from interlaced frames are addressed.
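For contrast with the multiframe Bayesian approach, the naive intra-field alternative it improves on, filling each missing scan line by vertical averaging within a single field, can be sketched as (illustrative NumPy, assuming a top field; the paper's MAP estimator additionally fuses motion-compensated neighbouring fields):

```python
import numpy as np

def deinterlace_line_average(field):
    """Rebuild a full frame from one field by line averaging.

    The field's h lines occupy the even rows of a 2h-row frame; each
    missing odd row is the mean of the known lines above and below.
    """
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[0::2] = field                            # known top-field lines
    frame[1:-1:2] = (field[:-1] + field[1:]) / 2   # interpolated lines
    frame[-1] = field[-1]                          # bottom line: replicate
    return frame
```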

15 citations

Journal ArticleDOI
14 Nov 2002
TL;DR: A super-resolution imaging method suitable for imaging objects moving in a dynamic scene is described, which can take advantage of common MPEG-4 encoding tools.
Abstract: A super-resolution imaging method suitable for imaging objects moving in a dynamic scene is described. The main operations are performed over three threads. The first thread computes a dense inter-frame 2D motion field, induced by the moving objects, at subpixel resolution. Concurrently, each video frame is enlarged by a cascade of an ideal low-pass filter and a higher-rate sampler, essentially stretching each image onto a larger grid. The main task is then to synthesize a higher-resolution image from the stretched image of the first frame and those of the subsequent frames after suitable motion compensation. A simple averaging process and/or a simplified Kalman filter may be used to minimize spatio-temporal noise in the aggregation process. The method takes advantage of widely used MPEG-4 encoding hardware/software tools. A few experimental cases are presented, with a basic description of the key operations performed in the overall process.
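The stretch-then-aggregate idea can be sketched as follows (an illustrative NumPy simplification: nearest-neighbour replication stands in for the low-pass filter/upsampler cascade, and the dense motion field is reduced to known integer shifts on the fine grid):

```python
import numpy as np

def stretch(frame, factor):
    """Place the frame on a larger grid (nearest-neighbour stand-in
    for the ideal low-pass filter + higher-rate sampler cascade)."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def fuse(frames, shifts, factor):
    """Average stretched frames after compensating their motion.

    `shifts` holds per-frame (dy, dx) offsets on the fine grid relative
    to the first frame; rolling by the negative shift aligns each
    stretched frame back onto the reference before averaging.
    """
    ref = np.zeros_like(stretch(frames[0], factor), dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        ref += np.roll(stretch(frame, factor), (-dy, -dx), axis=(0, 1))
    return ref / len(frames)
```

The plain average corresponds to the "simple averaging process" mentioned in the abstract; a recursive (Kalman-style) update would weight each incoming frame against the running estimate instead.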

15 citations

Patent
24 May 2007
TL;DR: In this paper, a method of frame interpolation is proposed that generates an interpolated frame for arranging between the first and second frames of an input video stream, so that the processed video stream has a higher frame rate than the input stream.
Abstract: The invention provides a method of frame interpolation, the method comprising: receiving first and second frames from an input video stream; and generating an interpolated frame for arranging between the first and second frames in a processed video stream, so that the processed video stream has a higher frame rate than the input video stream. Generating the interpolated frame comprises: identifying one or more moving objects within the first frame; segmenting the or each of the identified moving objects; and determining motion parameters for each of the segments of the segmented objects.
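The interpolation step can be sketched for the simplest case of one rigid segment with a known integer motion vector (a hypothetical illustration; the patent covers multiple segments, each with its own motion parameters):

```python
import numpy as np

def interpolate_frame(frame1, obj_mask, motion, background):
    """Synthesise the in-between frame by moving one segmented object
    half-way along its motion vector over a known background.

    `obj_mask` is a boolean segmentation of the object in `frame1`;
    `motion` is its (dy, dx) displacement between the two frames.
    """
    dy, dx = motion
    mid = background.copy()
    # shift the object and its mask by half the displacement
    half_obj = np.roll(frame1 * obj_mask, (dy // 2, dx // 2), axis=(0, 1))
    half_mask = np.roll(obj_mask, (dy // 2, dx // 2), axis=(0, 1))
    mid[half_mask] = half_obj[half_mask]
    return mid
```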

15 citations

Book ChapterDOI
01 Jan 2006
TL;DR: This chapter describes research that proposes that an optimal adaptation trajectory through the set of possible encodings exists, and indicates how to adapt transmission in response to changes in network conditions in order to maximize user-perceived quality.
Abstract: There is an increasing demand for streaming video applications over both the fixed Internet and wireless IP networks. The fluctuating bandwidth and time-varying delays of best-effort networks make providing good-quality streaming a challenge. Many adaptive video delivery mechanisms have been proposed over recent years; however, most do not explicitly consider user-perceived quality when making adaptations, nor do they define what quality is. This chapter describes research proposing that an optimal adaptation trajectory through the set of possible encodings exists, and indicates how to adapt transmission in response to changes in network conditions in order to maximize user-perceived quality.

Incorporating User Perception in Adaptive Video Streaming Systems, p. 243. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.

Introduction

Best-effort IP networks are unreliable and unpredictable, particularly in a wireless environment. Many factors can affect the quality of a transmission, such as delay, jitter, and loss. Congested network conditions result in lost video packets, which, as a consequence, produce poor-quality video. Further, streamed multimedia traffic imposes strict delay constraints: if a video packet does not arrive before its playout time, the packet is effectively lost. Packet losses have a particularly devastating effect on the smooth continuous playout of a video sequence because of interframe dependencies. A slightly degraded but uncorrupted video stream is less irritating to the user than a randomly corrupted stream. However, rapidly fluctuating quality should also be avoided: the human visual system adapts to a specific quality after a few seconds, and it becomes annoying if the viewer has to adjust to a varying quality over short time scales (Ghinea, Thomas, & Fish, 1999).
Controlled video-quality adaptation is needed to reduce the negative effects of congestion on the stream while providing the highest possible level of service and quality. For example, consider a user watching a video clip: when the network is congested, the video server must reduce the transmitted bitrate to overcome the negative effects of congestion. To reduce the bitrate of the video stream, its quality must be reduced by sacrificing some aspect of the video quality. There are a number of ways in which the quality can be adapted; for example, the image resolution (i.e., the amount of detail in the video image), the frame rate (i.e., the continuity of motion), or a combination of both can be adapted. The choice of which aspect of the video quality to adapt should depend on how the quality reduction will be perceived. In the past few years, there has been much work on video-quality adaptation and video-quality evaluation. In general, video-quality adaptation indicates how the bit rate of the video should be adjusted in response to changing network conditions. However, this is not addressed in terms of video quality, as for a given bit-rate budget there are many ways in which the video quality can be adapted. Video-quality evaluation measures the quality of video as perceived by users, but current evaluation approaches are not designed for adaptive video streaming transmissions. This chapter will first provide a generalized overview of adaptive multimedia systems and describe recent systems that use end-user perception as part of the adaptation process. Many of these adaptive systems rely on objective metrics to calculate the user-perceived quality. Several objective metrics of video quality have been developed, but they are limited and unsatisfactory in quantifying human perception. Further, it can be argued that, to date, objective metrics were not designed to assess the quality of an adapting video stream.
As a case study, the discussion will focus on recent research that demonstrates how user-perceived quality can be used as part of the adaptation process for multimedia. In this work, the concept of an Optimal Adaptation Trajectory (OAT) has been proposed. The OAT indicates how to adapt multimedia in response to changes in network conditions to maximize user-perceived quality. Finally, experimental subjective testing results are presented that demonstrate the dynamic nature of user perception with adapting multimedia. The results illustrate that a two-dimensional adaptation strategy based on the OAT outperforms one-dimensional adaptation schemes, giving better short-term and long-term user-perceived quality.

Review of Adaptive Multimedia Systems

Given the seriousness of congestion on the smooth continuous playout of multimedia, there is a strong need for adaptation. The primary goals of adapting multimedia are to ensure graceful quality adaptation, maintain a smooth continuous playout, and maximize the user-perceived quality. Multimedia servers should be able to intelligently adapt the video quality to match the available resources in the network. A number of key features need to be considered in the development of an adaptive streaming system (Wang & Schulzrinne, 1999), such as feedback to relay the state of the network between client and server, the frequency of this feedback, the adaptation algorithm used, the sensitivity of the algorithm to feedback, and the resulting user-perceived quality. Most important, however, is how the system reacts and adapts to congestion, and the perceived quality that results from this adaptation.
Adaptation Techniques

Broadly speaking, adaptation techniques attempt to reduce network congestion by matching the rate of the multimedia stream to the available network bandwidth. Without some sort of rate control, any transmitted data exceeding the available bandwidth would be discarded, lost, or corrupted in the network. Adaptation techniques can be classified into the following generalized categories: rate control, rate shaping, and rate-adaptive encoding (Figure 1). Each of these techniques adapts the transmitted video stream to match the available resources in the network by either adapting the rate at which packets are sent or adjusting the quality of the delivered video (Wu, Hou, Zhu, Lee, Chiang, Zhang, & Chao, 2000, 2002). These are briefly described in the following sections.

Figure 1. Adaptation techniques

Rate Control

Rate control is the most commonly used mechanism employed in adaptive multimedia systems. It can be implemented at the server, at the client, or as a hybrid scheme in which the client and server cooperate to achieve rate control.

• Sender-based rate control: On receipt of feedback from the client, the server adapts the transmission rate of the multimedia stream to minimize packet loss at the client by matching the transmission rate to the available network bandwidth. Without any rate control, data transmitted in excess of the available bandwidth would be discarded in the network.

• Receiver-based rate control: The clients control the receiving rate of video streams by adding or dropping layers. In layered multicast, the video sequence is compressed into multiple layers: a base layer and one or more enhancement layers. The base layer can be independently decoded and provides basic video quality; the enhancement layers can only be decoded together with the base layer, and they enhance its quality.

• Hybrid rate control: This consists of rate control at both the sender and receiver; it is targeted at multicast video and is applicable to both layered and non-layered video. Typically, clients regulate the receiving rate of video streams by adding or dropping layers, while the sender also adjusts the transmission rate of each layer based on feedback from the receivers. Unlike server-based schemes, the server uses multiple layers, and the rate of each layer may vary due to the hybrid approach of adapting at both the server and receiver.
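A sender-based scheme of the kind described can be sketched as an AIMD (additive-increase, multiplicative-decrease) rule driven by the client's loss feedback (illustrative only; the constants and threshold are assumptions, not values from the chapter):

```python
def adapt_bitrate(rate, loss_rate, *, min_rate=100, max_rate=4000,
                  increase_step=50, decrease_factor=0.75,
                  loss_threshold=0.02):
    """One feedback-driven rate update at the sender (kbit/s).

    If the reported loss rate exceeds the threshold, back off
    multiplicatively to relieve congestion; otherwise probe for spare
    bandwidth additively. All constants are illustrative assumptions.
    """
    if loss_rate > loss_threshold:
        return max(min_rate, rate * decrease_factor)
    return min(max_rate, rate + increase_step)
```

A perception-aware system, as the chapter argues, would then decide *how* to spend the new budget (resolution vs. frame rate) along the OAT rather than treating the bitrate alone as the target.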

15 citations


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Image processing: 229.9K papers, 3.5M citations, 82% related
Performance Metrics
Number of papers in the topic in previous years:

Year   Papers
2023   24
2022   72
2021   62
2020   84
2019   110
2018   97