SciSpace

Topic: Match moving

About: Match moving is a research topic. Over its lifetime, 4,520 publications have been published within this topic, receiving 91,454 citations. The topic is also known as: Match Move.

Papers

Journal Article. DOI: 10.1109/34.868677
Chris Stauffer, W.E.L. Grimson
Abstract: Our goal is to develop a visual monitoring system that passively observes moving objects in a site and learns patterns of activity from those observations. For extended sites, the system will require multiple cameras. Thus, key elements of the system are motion tracking, camera coordination, activity classification, and event detection. In this paper, we focus on motion tracking and show how one can use observed motion to learn patterns of activity in a site. Motion segmentation is based on an adaptive background subtraction method that models each pixel as a mixture of Gaussians and uses an online approximation to update the model. The Gaussian distributions are then evaluated to determine which are most likely to result from a background process. This yields a stable, real-time outdoor tracker that reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. While a tracking system is unaware of the identity of any object it tracks, the identity remains the same for the entire tracking sequence. Our system leverages this information by accumulating joint co-occurrences of the representations within a sequence. These joint co-occurrence statistics are then used to create a hierarchical binary-tree classification of the representations. This method is useful for classifying sequences, as well as individual instances of activities in a site.


Topics: Match moving (59%), Background subtraction (58%), Tracking system (55%)

3,562 Citations
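The adaptive per-pixel mixture-of-Gaussians update described in this abstract can be sketched as follows. This is a simplified scalar-intensity illustration, not the authors' implementation: the class and parameter names are our own, and the learning rate rho is fixed to alpha rather than the posterior-weighted rate used in the paper.

```python
import math

class PixelGMM:
    """Per-pixel mixture-of-Gaussians background model: a simplified
    sketch of the Stauffer-Grimson online update for a single pixel."""
    def __init__(self, k=3, alpha=0.05, match_thresh=2.5, bg_ratio=0.7):
        self.alpha = alpha
        self.match_thresh = match_thresh
        self.bg_ratio = bg_ratio
        # One [weight, mean, variance] triple per Gaussian component.
        self.comps = [[1.0 / k, 128.0 * (i + 1) / k, 900.0] for i in range(k)]

    def update(self, x):
        """Update the model with intensity x; return True if x is background."""
        matched = None
        for c in self.comps:
            if abs(x - c[1]) < self.match_thresh * math.sqrt(c[2]):
                matched = c
                break
        for c in self.comps:
            c[0] = (1 - self.alpha) * c[0] + self.alpha * (1.0 if c is matched else 0.0)
        if matched is None:
            # No component explains x: replace the least probable one
            # with a new wide Gaussian centred on x.
            self.comps.sort(key=lambda c: c[0])
            self.comps[0] = [self.alpha, float(x), 900.0]
        else:
            rho = self.alpha  # the paper uses rho = alpha * N(x | mu, sigma)
            d = x - matched[1]
            matched[1] += rho * d
            matched[2] += rho * (d * d - matched[2])
        # Renormalise weights, rank by weight/sigma, pick the background set:
        # background processes accumulate high weight and low variance.
        total = sum(c[0] for c in self.comps)
        for c in self.comps:
            c[0] /= total
        ranked = sorted(self.comps, key=lambda c: c[0] / math.sqrt(c[2]), reverse=True)
        cum, bg = 0.0, []
        for c in ranked:
            bg.append(c)
            cum += c[0]
            if cum > self.bg_ratio:
                break
        return any(abs(x - c[1]) < self.match_thresh * math.sqrt(c[2]) for c in bg)

# A pixel that stays constant is learned as background;
# a sudden different value is flagged as foreground.
model = PixelGMM()
for _ in range(100):
    model.update(100.0)
print(model.update(100.0), model.update(220.0))  # prints: True False
```

Ranking components by weight/variance is what lets the model absorb repetitive clutter and gradual lighting changes while still flagging genuinely new intensities.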


Open Access Proceedings Article. DOI: 10.1109/ICCV.2013.441
Heng Wang, Cordelia Schmid
01 Dec 2013
Abstract: Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.


  • Figure 5. From left to right, example frames from (a) Hollywood2, (b) HMDB51, (c) Olympic Sports and (d) UCF50.
  • Figure 1. First row: images of two consecutive frames overlaid; second row: optical flow [8] between the two frames; third row: optical flow after removing camera motion; last row: trajectories removed due to camera motion, shown in white.
  • Table 1. Comparison of the baseline with our method and two intermediate results using FV encoding. “WarpFlow”: computing motion descriptors (i.e., Trajectory, HOF and MBH) using warped optical flow, while keeping all the trajectories; “RmTrack”: removing background trajectories, but computing motion descriptors using the original flow field; “Combined”: removing background trajectories, and computing Trajectory, HOF and MBH with warped optical flow.
  • Table 2. Comparison of feature encoding with bag of features and Fisher vector. “DTF” stands for the original dense trajectory features [40] with RootSIFT normalization, whereas “ITF” are our improved trajectory features.
  • Figure 3. Examples of removed trajectories under various camera motions, e.g., pan, zoom, tilt. White trajectories are considered due to camera motion. The red dots are the trajectory positions in the current frame. The last row shows two failure cases: the left one is due to severe motion blur; the right one fits the homography to the moving humans, as they dominate the frame.

Topics: Motion estimation (67%), Motion field (65%), Match moving (64%)

3,063 Citations
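The camera-motion step in this abstract (match points between frames, robustly fit a global motion model with RANSAC, then discard trajectories consistent with it) can be sketched with a deliberately simplified stand-in: a pure 2-D translation instead of the paper's homography, and synthetic correspondences instead of SURF and dense-flow matches. All names and numbers below are illustrative.

```python
import random

def ransac_translation(matches, iters=200, inlier_tol=2.0, seed=0):
    """Robustly estimate a global 2-D translation from point matches,
    a simplified stand-in for the paper's RANSAC homography fit.
    matches: list of ((x0, y0), (x1, y1)) correspondences."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x0, y0), (x1, y1) = rng.choice(matches)
        dx, dy = x1 - x0, y1 - y0          # hypothesis from a single match
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) < inlier_tol
                   and abs((m[1][1] - m[0][1]) - dy) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# Background points follow the camera motion (+5, 0);
# a moving human generates inconsistent matches, as in the paper.
background = [((x, y), (x + 5, y)) for x in range(10) for y in range(10)]
human = [((50 + i, 50), (58 + i, 47)) for i in range(5)]
camera, inliers = ransac_translation(background + human)
print(camera)  # → (5, 0): the estimated camera motion

# Removing camera-consistent trajectories keeps only human motion.
foreground = [m for m in background + human if m not in inliers]
print(len(foreground))  # → 5
```

In the paper the same principle drives both steps: the robust global fit ignores human-induced matches (helped further by a person detector), and the recovered camera motion is then subtracted from the optical flow before computing HOF and MBH descriptors.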


Proceedings Article. DOI: 10.1109/NAMW.1997.609859
Jake K. Aggarwal, Qin Cai
16 Jun 1997
Abstract: Human motion analysis is receiving increasing attention from computer vision researchers. This interest is motivated by a wide spectrum of applications, such as athletic performance analysis, surveillance, man-machine interfaces, content-based image storage and retrieval, and video conferencing. The paper gives an overview of the various tasks involved in motion analysis of the human body. The authors focus on three major areas related to interpreting human motion: 1) motion analysis involving human body parts, 2) tracking of human motion using single or multiple cameras, and 3) recognizing human activities from image sequences. Motion analysis of human body parts involves the low-level segmentation of the human body into segments connected by joints, and recovers the 3D structure of the human body using its 2D projections over a sequence of images. Tracking human motion using a single or multiple camera focuses on higher-level processing, in which moving humans are tracked without identifying specific parts of the body structure. After successfully matching the moving human image from one frame to another in image sequences, understanding the human movements or activities comes naturally, which leads to a discussion of recognizing human activities. The review is illustrated by examples.


Topics: Motion History Images (66%), Motion estimation (63%), Structure from motion (62%)

1,663 Citations


Journal Article. DOI: 10.1006/CVIU.1998.0744
Jake K. Aggarwal, Qin Cai
Abstract: Human motion analysis is receiving increasing attention from computer vision researchers. This interest is motivated by a wide spectrum of applications, such as athletic performance analysis, surveillance, man-machine interfaces, content-based image storage and retrieval, and video conferencing. This paper gives an overview of the various tasks involved in motion analysis of the human body. We focus on three major areas related to interpreting human motion: (1) motion analysis involving human body parts, (2) tracking a moving human from a single view or multiple camera perspectives, and (3) recognizing human activities from image sequences. Motion analysis of human body parts involves the low-level segmentation of the human body into segments connected by joints and recovers the 3D structure of the human body using its 2D projections over a sequence of images. Tracking human motion from a single view or multiple perspectives focuses on higher-level processing, in which moving humans are tracked without identifying their body parts. After successfully matching the moving human image from one frame to another in an image sequence, understanding the human movements or activities comes naturally, which leads to our discussion of recognizing human activities.


Topics: Motion analysis (61%), Motion estimation (61%), Match moving (61%)

1,564 Citations


Open Access Proceedings Article. DOI: 10.1109/CVPR.2000.854758
Jonathan Deutscher, Andrew Blake, Ian Reid
15 Jun 2000
Abstract: The main challenge in articulated body motion tracking is the large number of degrees of freedom (around 30) to be recovered. Search algorithms, either deterministic or stochastic, that search such a space without constraint, fall foul of exponential computational complexity. One approach is to introduce constraints: either labelling using markers or colour coding, prior assumptions about motion trajectories or view restrictions. Another is to relax constraints arising from articulation, and track limbs as if their motions were independent. In contrast, we aim for general tracking without special preparation of objects or restrictive assumptions. The principal contribution of the paper is the development of a modified particle filter for search in high dimensional configuration spaces. It uses a continuation principle based on annealing to introduce the influence of narrow peaks in the fitness function, gradually. The new algorithm, termed annealed particle filtering, is shown to be capable of recovering full articulated body motion efficiently.


  • Figure 6: Configurations of the pixel map sampling points pi(X;Z) for the edge-based measurements (a) and the foreground segmentation measurements (b). The sampling points for the edge measurements are located along the occluding contours of the model’s conical sections that have been projected into the image. The sampling points for the foreground segmentation measurements are taken from a grid within these occluding contours.
  • Figure 1: Illustration of the annealed particle filter with M = 1.
  • Figure 3: Annealed particle filter in progress. The sets S_{k,m} are plotted here, taken while tracking the walking person seen in figure 9. Only the horizontal translation components x0 and x1 of the model configuration vector X are shown. Starting with S_{k-1,0} from the previous time step, the particles are diffused to form S_{k,9}, which easily covers the expected range of translational movement of the subject. The particles are then slowly annealed over 10 layers (the sets S_{k,6} to S_{k,4} are omitted for brevity) to produce S_{k,0}, which is clustered around the maximum of the weighting function.
Topics: Motion estimation (57%), Match moving (55%), Search algorithm (53%)

1,028 Citations
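The annealed particle filter described in the abstract can be illustrated in one dimension. This sketch uses a made-up fitness function with a single narrow peak and an assumed linear annealing schedule; the paper's version operates on roughly 30-dimensional body configurations with edge- and silhouette-based weighting functions, and all names here are our own.

```python
import math
import random

def annealed_particle_filter(fitness, n=200, layers=10, seed=0):
    """One time step of annealed particle filtering (1-D sketch).
    Particles are resampled with weights fitness(x)**beta_m, where
    beta_m rises towards 1 over the layers, while the diffusion noise
    shrinks, so the population gradually settles on a narrow peak that
    plain importance sampling would likely miss in high dimensions."""
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    for m in range(layers):
        beta = (m + 1) / layers                 # annealing schedule, beta -> 1
        weights = [fitness(x) ** beta for x in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # Resample, then diffuse with noise that decays over the layers.
        sigma = 1.0 * (1 - m / layers) + 0.01
        particles = [rng.choices(particles, probs)[0] + rng.gauss(0, sigma)
                     for _ in range(n)]
    return sum(particles) / n                   # posterior mean estimate

# Narrow fitness peak at x = 3 inside a broad, flat landscape; the small
# floor keeps the weights defined everywhere.
peak = lambda x: math.exp(-((x - 3.0) ** 2) / 0.05) + 1e-6
print(round(annealed_particle_filter(peak), 1))
```

Raising the weights to a small power early on flattens the fitness function, letting particles survive far from the peak; as beta approaches 1 the narrow peak's full influence is introduced gradually, which is the continuation principle the paper describes.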


Performance Metrics

Number of papers in the topic in previous years:

Year    Papers
2022         2
2021       142
2020       200
2019       208
2018       198
2017       223

Top Attributes

Topic's top 5 most impactful authors:

Andre Kyme: 20 papers, 225 citations
Roger Fulton: 17 papers, 222 citations
Bijan Shirinzadeh: 12 papers, 457 citations
Steven R. Meikle: 12 papers, 188 citations
Michael A. King: 12 papers, 231 citations

Network Information

Related Topics (5):

Motion estimation: 31.2K papers, 699K citations (89% related)
Histogram: 21K papers, 356.6K citations (88% related)
Image registration: 20.4K papers, 480.7K citations (87% related)
Optical flow: 13.1K papers, 371.5K citations (87% related)
Iterative reconstruction: 41.2K papers, 841.1K citations (87% related)