Topic

Quarter-pixel motion

About: Quarter-pixel motion is a research topic. Over its lifetime, 8,419 publications have been published within this topic, receiving 146,232 citations.


Papers

Open access · Proceedings Article · DOI: 10.1109/ICCV.2013.441
Action Recognition with Improved Trajectories
Heng Wang, Cordelia Schmid
01 Dec 2013
Abstract: Recently, dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking camera motion into account to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are then used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimate to cancel out camera motion from the optical flow, which significantly improves motion-based descriptors such as HOF and MBH. Experimental results on four challenging action datasets (Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.


  • Figure 5. From left to right, example frames from (a) Hollywood2, (b) HMDB51, (c) Olympic Sports and (d) UCF50.
  • Figure 1. First row: images of two consecutive frames overlaid; second row: optical flow [8] between the two frames; third row: optical flow after removing camera motion; last row: trajectories removed due to camera motion, in white.
  • Table 1. Comparison of the baseline with our method and two intermediate results using FV encoding. “WarpFlow”: computing motion descriptors (i.e., Trajectory, HOF and MBH) using warped optical flow, while keeping all the trajectories; “RmTrack”: removing background trajectories, but computing motion descriptors using the original flow field; “Combined”: removing background trajectories, and computing Trajectory, HOF and MBH with warped optical flow.
  • Table 2. Comparison of feature encoding with bag of features and Fisher vector. “DTF” stands for the original dense trajectory features [40] with RootSIFT normalization, whereas “ITF” are our improved trajectory features.
  • Figure 3. Examples of removed trajectories under various camera motions, e.g., pan, zoom and tilt. White trajectories are considered due to camera motion. The red dots are the trajectory positions in the current frame. The last row shows two failure cases: the left one is due to severe motion blur; the right one fits the homography to the moving humans, as they dominate the frame.

Topics: Motion estimation (67%), Motion field (65%), Match moving (64%)

3,063 Citations
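To make the camera-motion compensation step concrete, here is a minimal sketch in Python with OpenCV. It follows the recipe from the abstract (match features between consecutive frames, fit a homography robustly with RANSAC, then cancel the camera motion from the dense flow), but it is an illustration, not the authors' code: ORB stands in for SURF (which is patent-encumbered and absent from default OpenCV builds), the dense-flow matches and the human-detector masking are omitted, and all parameter values are assumptions.

```python
# Sketch: estimate camera motion between two grayscale frames as a
# homography via RANSAC on feature matches, then compute optical flow
# against the warped frame so only residual (foreground) motion remains.
import cv2
import numpy as np

def camera_compensated_flow(prev_gray, next_gray):
    # 1. Match keypoints between the two frames (ORB in place of SURF).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(next_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. RANSAC homography: inconsistent (e.g. human) matches are outliers.
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 1.0)

    # 3. Warp the second frame back into the first frame's coordinates, so
    #    the flow computed below no longer contains the camera motion.
    h, w = prev_gray.shape
    warped = cv2.warpPerspective(next_gray, H, (w, h),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return cv2.calcOpticalFlowFarneback(prev_gray, warped, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```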


Open access · Journal Article · DOI: 10.1109/TCOM.1981.1094950
Displacement Measurement and Its Application in Interframe Image Coding
J. R. Jain, A. K. Jain
Abstract: A new technique for estimating the interframe displacement of small blocks with minimum mean square error is presented, together with an efficient algorithm for searching the direction of displacement. Results of applying the technique to two sets of images show an 8-10 dB improvement in interframe variance reduction due to motion compensation. The motion compensation is applied to the analysis and design of a hybrid coding scheme, and the results show a factor-of-two gain at low bit rates.


Topics: Motion compensation (57%), Quarter-pixel motion (57%), Motion estimation (56%)

1,867 Citations
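For illustration, the sketch below implements the minimum-MSE block matching idea in Python/NumPy. It uses an exhaustive scan of the displacement window for clarity, whereas the paper's contribution is an efficient directional (2-D logarithmic) search that checks far fewer candidates; block size and search radius here are assumed values.

```python
# Sketch: find the displacement of one block that minimizes mean squared
# error against the previous frame (exhaustive search for clarity).
import numpy as np

def match_block(prev_frame, cur_frame, top, left, bsize=8, radius=7):
    block = cur_frame[top:top + bsize, left:left + bsize].astype(np.float64)
    best_dy = best_dx = 0
    best_mse = np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            # Skip candidates that fall outside the previous frame.
            if y < 0 or x < 0 or y + bsize > prev_frame.shape[0] \
                    or x + bsize > prev_frame.shape[1]:
                continue
            cand = prev_frame[y:y + bsize, x:x + bsize].astype(np.float64)
            mse = np.mean((block - cand) ** 2)
            if mse < best_mse:
                best_mse, best_dy, best_dx = mse, dy, dx
    return (best_dy, best_dx), best_mse
```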


Open access · Proceedings Article
Visual Odometry
David Nistér, Oleg Naroditsky, James Bergen
01 Jan 2004
Abstract: We present a system that estimates the motion of a stereo head, or a single moving camera, based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker: point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates what we call visual odometry, i.e., motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual odometry can also be used in conjunction with information from other sources such as GPS, inertial sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive and handheld platforms. We focus on results with an autonomous ground vehicle and give examples of camera trajectories estimated purely from images over previously unseen distances and periods of time.


Topics: Odometry (68%), Visual odometry (68%), Motion estimation (63%)

1,657 Citations
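As a sketch of one step of this hypothesize-and-test pipeline, the snippet below estimates relative camera pose from matched feature tracks with OpenCV's RANSAC-based essential matrix routines. This is an illustration under assumptions, not the authors' implementation: `K`, `pts_prev` and `pts_cur` (the camera intrinsics and the tracked point positions in two frames) are assumed inputs, and monocular geometry recovers translation only up to scale.

```python
# Sketch: relative pose between two frames from feature tracks, using a
# RANSAC (hypothesize-and-test) estimate of the essential matrix.
import cv2
import numpy as np

def relative_pose(pts_prev, pts_cur, K):
    # Hypothesize-and-test: RANSAC keeps only the tracks consistent with
    # the dominant epipolar geometry.
    E, inliers = cv2.findEssentialMat(pts_prev, pts_cur, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into rotation R and unit-norm translation t; chaining
    # these per-frame estimates yields the visual odometry trajectory.
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_cur, K, mask=inliers)
    return R, t
```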


Journal Article · DOI: 10.1109/76.499840
A Novel Four-Step Search Algorithm for Fast Block Motion Estimation
Lai-Man Po, Wing-Chung Ma
Abstract: Based on the center-biased motion vector distribution characteristic of real-world image sequences, a new four-step search (4SS) algorithm with a center-biased checking-point pattern for fast block motion estimation is proposed in this paper. A halfway-stop technique is employed in the new algorithm, which takes between two and four search steps and checks between 17 and 27 points in total. Simulation results show that the proposed 4SS performs better than the well-known three-step search and similarly to the new three-step search (N3SS) in terms of motion compensation error. In addition, the 4SS reduces the worst-case computational requirement from 33 to 27 search points and the average requirement from 21 to 19 search points, compared with N3SS.


Topics: Beam search (66%), Search algorithm (65%), Motion estimation (63%)

1,595 Citations
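The 4SS control flow is compact enough to sketch directly. In the Python sketch below, `cost(dy, dx)` is an assumed callable returning the block-matching error (e.g. SAD) at a candidate displacement; clamping candidates to the ±7 search window, and the paper's avoidance of re-checking already-visited points, are omitted for brevity.

```python
# Sketch: center-biased four-step search (4SS) for one block.
def four_step_search(cost):
    cy = cx = 0
    best = cost(0, 0)
    # Steps 1-3: 9-point patterns with step size 2 (a 5x5 neighborhood).
    for _ in range(3):
        ny, nx = cy, cx
        for dy in (-2, 0, 2):
            for dx in (-2, 0, 2):
                c = cost(cy + dy, cx + dx)
                if c < best:
                    best, ny, nx = c, cy + dy, cx + dx
        if (ny, nx) == (cy, cx):
            break  # halfway stop: the center is best, go straight to step 4
        cy, cx = ny, nx
    # Step 4: final 3x3 pattern with step size 1.
    ny, nx = cy, cx
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            c = cost(cy + dy, cx + dx)
            if c < best:
                best, ny, nx = c, cy + dy, cx + dx
    return ny, nx
```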


Journal Article · DOI: 10.1145/212094.212141
The Computation of Optical Flow
Steven S. Beauchemin, John L. Barron
Abstract: Two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements, usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth), through assumptions concerning the structure of the optical flow field, the three-dimensional environment, and the motion of the sensor. Optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus-of-expansion calculations, motion-compensated encoding, and stereo disparity measurement. We investigate the computation of optical flow in this survey: widely known methods for estimating optical flow are classified and examined by scrutinizing the hypotheses and assumptions they use. The survey concludes with a discussion of current research issues.


Topics: Optical flow (75%), Motion field (73%), Motion estimation (70%)

1,237 Citations
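Of the differential methods the survey classifies, Lucas-Kanade is the simplest to show: it solves the optical flow constraint I_x*u + I_y*v + I_t = 0 by least squares over a small window. The NumPy sketch below is a minimal single-scale, single-point version for illustration; the window size is an assumed parameter, and pyramids, weighting and boundary handling are omitted.

```python
# Sketch: Lucas-Kanade optical flow at one pixel of two grayscale frames.
import numpy as np

def lucas_kanade(frame1, frame2, y, x, win=7):
    f1 = frame1.astype(np.float64)
    f2 = frame2.astype(np.float64)
    Iy, Ix = np.gradient(f1)          # spatial derivatives (rows, cols)
    It = f2 - f1                      # temporal derivative
    r = win // 2                      # (y, x) must be at least r from edges
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    # Stack the constraint I_x*u + I_y*v = -I_t for every pixel in the
    # window and solve the overdetermined system by least squares.
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```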


Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2019   5
2018   26
2017   142
2016   262
2015   311
2014   397

Top Attributes


Topic's top 5 most impactful authors:

1. Marta Karczewicz: 45 papers, 1.6K citations
2. Wen Gao: 40 papers, 540 citations
3. Wan-Chi Siu: 30 papers, 205 citations
4. Truong Q. Nguyen: 29 papers, 501 citations
5. Oscar C. Au: 29 papers, 277 citations

Network Information

Related Topics (5):

1. Multiview Video Coding: 9.7K papers, 198.8K citations (94% related)
2. Discrete cosine transform: 16.6K papers, 263.2K citations (94% related)
3. Motion estimation: 31.2K papers, 699K citations (93% related)
4. Video tracking: 37K papers, 735.9K citations (93% related)
5. Transform coding: 7.5K papers, 190.2K citations (92% related)