scispace - formally typeset

Block-matching algorithm

About: Block-matching algorithm is a research topic. Over the lifetime, 9590 publications have been published within this topic receiving 165336 citations.
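Block matching estimates motion by comparing a block of pixels in the current frame against shifted candidate blocks in a reference frame and keeping the shift with the lowest matching cost. A minimal full-search sketch with a sum-of-absolute-differences (SAD) cost, where the function name and parameters are illustrative assumptions and not drawn from any paper listed below:

```python
import numpy as np

def match_block(ref, cur, top, left, block=8, search=4):
    """Find the motion vector (dy, dx) for the block of `cur` at (top, left)
    by exhaustively comparing it against shifted blocks in `ref`."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidates that fall outside the reference frame.
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            cost = np.abs(target - cand).sum()  # SAD matching cost
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Tiny demo: shift a random frame by (1, 2) and recover the motion vector.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))  # current frame = shifted reference
print(match_block(ref, cur, top=8, left=8))    # the block traces back to (-1, -2)
```

Full search is exact but costly; the fast variants studied in this literature (three-step search, diamond search, etc.) trade a small accuracy loss for far fewer candidate evaluations.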


Papers
Patent
05 Feb 1997
TL;DR: In this paper, a method for automatic content-based video indexing from object motion is described: objects are tracked through segmented data in an object tracker, and a symbolic representation of the video is generated in the form of an annotated graph describing the objects and their movement.
Abstract: A method to provide automatic content-based video indexing from object motion is described. Moving objects in video from a surveillance camera 11 are detected in the video sequence using motion segmentation methods by motion segmentor 21. Objects are tracked through segmented data in an object tracker 22. A symbolic representation of the video is generated in the form of an annotated graph describing the objects and their movement. A motion analyzer 23 analyzes the results of object tracking and annotates the motion graph with indices describing several events. The graph is then indexed using a rule-based classification scheme to identify events of interest such as appearance/disappearance, deposit/removal, entrance/exit, and motion/rest of objects. Clips of the video identified by spatio-temporal, event-based, and object-based queries are recalled to view the desired video.

602 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: Deep voxel flow, as described in this paper, combines the advantages of optical-flow and neural-network-based methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones; the technique requires no human supervision and can be applied at any video resolution.
Abstract: We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.

601 citations

Journal ArticleDOI
27 Jul 2009
TL;DR: A technique is presented that transforms a video from a hand-held camera so that it appears as if it were taken with a directed camera motion; by aiming for perceptual plausibility rather than accurate reconstruction, the method can effectively recreate dynamic scenes from a single source video.
Abstract: We describe a technique that transforms a video from a hand-held video camera so that it appears as if it were taken with a directed camera motion. Our method adjusts the video to appear as if it were taken from nearby viewpoints, allowing 3D camera movements to be simulated. By aiming only for perceptual plausibility, rather than accurate reconstruction, we are able to develop algorithms that can effectively recreate dynamic scenes from a single source video. Our technique first recovers the original 3D camera motion and a sparse set of 3D, static scene points using an off-the-shelf structure-from-motion system. Then, a desired camera path is computed either automatically (e.g., by fitting a linear or quadratic path) or interactively. Finally, our technique performs a least-squares optimization that computes a spatially-varying warp from each input video frame into an output frame. The warp is computed to both follow the sparse displacements suggested by the recovered 3D structure, and avoid deforming the content in the video frame. Our experiments on stabilizing challenging videos of dynamic scenes demonstrate the effectiveness of our technique.

536 citations

Proceedings ArticleDOI
26 Dec 2007
TL;DR: The proposed algorithm is fully automatic and based on local saliency, motion detection, and object detectors; its performance is demonstrated on a variety of video sequences and compared to the state of the art in image retargeting.
Abstract: Video retargeting is the process of transforming an existing video to fit the dimensions of an arbitrary display. A compelling retargeting aims at preserving the viewers' experience by maintaining the information content of important regions in the frame, whilst keeping their aspect ratio. An efficient algorithm for video retargeting is introduced. It consists of two stages. First, the frame is analyzed to detect the importance of each region in the frame. Then, a transformation that respects the analysis shrinks less important regions more than important ones. Our analysis is fully automatic and based on local saliency, motion detection and object detectors. The performance of the proposed algorithm is demonstrated on a variety of video sequences, and compared to the state of the art in image retargeting.

535 citations

Patent
24 Apr 1995
TL;DR: In this article, a video method and system are described for automatically tracking a viewer-defined target within a viewer-defined window of a video image: a target is selected within the video, an identification of the selected target is produced, a window is defined within the video, and the identification is used to automatically keep the target within the window as it moves.
Abstract: A video method and system for automatically tracking a viewer defined target within a viewer defined window of a video image as the target moves within the video image by selecting a target within a video, producing an identification of the selected target, defining a window within the video, utilizing the identification to automatically maintain the selected target within the window of the video as the selected target shifts within the video, and transmitting the window of the video.

518 citations


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (88% related)
- Image segmentation: 79.6K papers, 1.8M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
- Image processing: 229.9K papers, 3.5M citations (84% related)
- Convolutional neural network: 74.7K papers, 2M citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    30
2021    9
2020    10
2019    16
2018    43