Topic

Markov random field

About: Markov random field is a research topic. Over the lifetime, 5669 publications have been published within this topic receiving 179568 citations. The topic is also known as: MRF.


Papers
Journal ArticleDOI
TL;DR: Coding and decoding results show that, with only 8~30% additional bandwidth over a single-view bit stream, one can transmit, store, and reconstruct stereoscopic video sequences with reasonably good performance.
Abstract: The paper proposes a stereo video coding system. To ensure compatibility with monoscopic transmission, one of the view sequences is coded and transmitted conforming to the MPEG standard, referred to as the reference stream; the other view is referred to as the target stream. Only a few frames of the latter are coded and transmitted, while the rest are skipped and reconstructed at the decoder using a novel stereoscopic frame compensation and interpolation technique, termed SFEI BLCF. In disparity estimation, smooth and accurate disparity fields are obtained by using hierarchical Markov random field (MRF) and Gibbs random field (GRF) models. A fast search method is used to improve the precision and computation speed. Coding and decoding results show that, with only 8~30% additional bandwidth over a single-view bit stream, one can transmit, store, and reconstruct stereoscopic video sequences with reasonably good performance.
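The smoothness prior that this abstract attributes to the MRF/GRF models can be illustrated with a minimal first-order MRF over disparity labels. The sketch below minimizes a data term plus a Potts smoothness term with plain iterated conditional modes (ICM); it is not the paper's hierarchical model or fast search, and all names (`mrf_disparity_icm`, `cost_volume`, `lam`) are illustrative.

```python
import numpy as np

def mrf_disparity_icm(cost_volume, lam=1.0, iters=5):
    """Estimate a smooth disparity field from a per-pixel matching-cost
    volume with a first-order MRF prior, minimized by iterated conditional
    modes (ICM). cost_volume has shape (H, W, D): one matching cost per
    disparity label at each pixel. (Illustrative sketch, not the paper's
    hierarchical MRF/GRF method.)"""
    H, W, D = cost_volume.shape
    labels = cost_volume.argmin(axis=2)       # data-term-only initialization
    disp = np.arange(D)
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                # Labels of the 4-connected neighbours.
                nbrs = [labels[yy, xx]
                        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < H and 0 <= xx < W]
                # Energy per label = matching cost + lam * Potts smoothness.
                energy = cost_volume[y, x] + lam * np.array(
                    [sum(d != n for n in nbrs) for d in disp])
                labels[y, x] = int(energy.argmin())
    return labels
```

With a sufficiently strong smoothness weight, an isolated outlier disparity is overridden by its neighbours, which is what produces the "smooth and accurate disparity fields" the abstract refers to.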

44 citations

Journal ArticleDOI
TL;DR: A novel method is presented for co-segmenting the common foreground object from a group of video sequences, formulated as a Markov Random Field model whose foreground-model constraint is computed automatically or specified with little user effort.
Abstract: Multiple videos may share a common foreground object, for instance a family member in home videos, or a leading role in various clips of a movie or TV series. In this paper, we present a novel method for co-segmenting the common foreground object from a group of video sequences. The issue has seldom been touched on in the literature. Starting from over-segmentation of each video into Temporal Superpixels (TSPs), we first propose a new subspace clustering algorithm which segments the videos into consistent spatio-temporal regions with multiple classes, such that the common foreground has consistent labels across different videos. The subspace clustering algorithm exploits the fact that across different videos the common foreground shares similar appearance features, while motions can be used to better differentiate regions within each video, making accurate extraction of object boundaries easier. We further formulate video object co-segmentation as a Markov Random Field (MRF) model which imposes the constraint of a foreground model that is computed automatically or specified with little user effort. Quadratic Pseudo-Boolean Optimization (QPBO) is used to generate the results. Experiments show that this video co-segmentation framework can achieve good quality foreground extraction results without user interaction for those videos with unrelated background, and with only moderate user interaction for those videos with similar background. Comparisons with previous work also show the superiority of our approach.
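The MRF energy described here combines a foreground-appearance unary term per region with a pairwise smoothness term, optimized with QPBO over the full superpixel graph. As a minimal, hedged sketch, the snippet below restricts the model to a 1-D chain of regions, where the exact MAP labeling falls out of dynamic programming; `chain_mrf_fg_bg`, `unary`, and `lam` are hypothetical names, and this is a stand-in for, not a reproduction of, the paper's QPBO formulation.

```python
import numpy as np

def chain_mrf_fg_bg(unary, lam=1.0):
    """Exact MAP labeling of a binary MRF on a 1-D chain via dynamic
    programming. unary has shape (N, 2): costs for background (0) and
    foreground (1) at each node; the pairwise term is a Potts penalty
    lam for every label change along the chain."""
    N = unary.shape[0]
    cost = unary[0].astype(float).copy()
    back = np.zeros((N, 2), dtype=int)
    for i in range(1, N):
        new = np.empty(2)
        for l in (0, 1):
            # Cost of reaching label l from each previous label.
            trans = [cost[p] + lam * (p != l) for p in (0, 1)]
            back[i, l] = int(np.argmin(trans))
            new[l] = unary[i, l] + min(trans)
        cost = new
    # Backtrack the optimal labeling from the cheaper terminal state.
    labels = [int(np.argmin(cost))]
    for i in range(N - 1, 0, -1):
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]
```

Raising `lam` trades fidelity to the appearance model for spatial coherence, which is exactly the trade-off the full 2-D MRF makes across superpixels.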

44 citations

Proceedings ArticleDOI
06 Nov 2011
TL;DR: A novel photometric calibration technique allows calibration of scenes containing multiple piecewise-constant chromaticities, and a likelihood term linking surface normal, image intensity, and photometric properties is developed, which allows the estimation of the number of chromaticities present in a scene to be framed as a model-estimation problem.
Abstract: We present a multispectral photometric stereo method for capturing geometry of deforming surfaces. A novel photometric calibration technique allows calibration of scenes containing multiple piecewise constant chromaticities. This method estimates per-pixel photometric properties, then uses a RANSAC-based approach to estimate the dominant chromaticities in the scene. A likelihood term is developed linking surface normal, image intensity and photometric properties, which allows estimating the number of chromaticities present in a scene to be framed as a model estimation problem. The Bayesian Information Criterion is applied to automatically estimate the number of chromaticities present during calibration. A two-camera stereo system provides low resolution geometry, allowing the likelihood term to be used in segmenting new images into regions of constant chromaticity. This segmentation is carried out in a Markov Random Field framework and allows the correct photometric properties to be used at each pixel to estimate a dense normal map. Results are shown on several challenging real-world sequences, demonstrating state-of-the-art results using only two cameras and three light sources. Quantitative evaluation is provided against synthetic ground truth data.
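The likelihood term linking surface normal, image intensity, and photometric properties can be illustrated under a simple Lambertian multispectral model, in which each colour channel c sees intensity chromaticity_c * max(l_c . n, 0) from its dedicated light l_c. The sketch below is a hedged illustration of that kind of term, not the authors' formulation; all function and variable names are assumptions.

```python
import numpy as np

def multispectral_residual(pixel_rgb, normal, lights, chromaticity):
    """Squared residual of a Lambertian multispectral model.
    lights: (3, 3) unit light directions, one row per colour channel;
    chromaticity: (3,) per-channel reflectance. A lower residual means
    this chromaticity better explains the observed pixel.
    (Illustrative sketch of the kind of likelihood term described.)"""
    shading = lights @ normal                      # per-channel l_c . n
    pred = chromaticity * np.clip(shading, 0.0, None)  # no negative light
    return float(np.sum((pixel_rgb - pred) ** 2))

def classify_chromaticity(pixel_rgb, normal, lights, chromaticities):
    """Assign the pixel to the chromaticity with the smallest residual —
    a per-pixel stand-in for the paper's MRF-based segmentation, which
    additionally enforces spatial coherence between neighbouring pixels."""
    res = [multispectral_residual(pixel_rgb, normal, lights, c)
           for c in chromaticities]
    return int(np.argmin(res))
```

In the paper, per-pixel terms like this are combined with a smoothness prior in an MRF so that the segmentation into constant-chromaticity regions is spatially coherent; here only the data term is sketched.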

44 citations

Journal ArticleDOI
TL;DR: A pixel-level statistical estimation model for image segmentation using the CNN UM architecture and the Modified Metropolis Dynamics (MMD) method, which can be implemented in the raw analog architecture of the CNN.
Abstract: Markovian approaches to early vision processes need a huge amount of computing power. These algorithms can usually be implemented on parallel computing structures. With the Cellular Neural Network (CNN), a new image processing tool is coming into consideration. Its VLSI implementation takes place on a single analog chip containing several thousand cells. Herein we use the CNN UM architecture for statistical image segmentation. The Modified Metropolis Dynamics (MMD) method can be implemented in the raw analog architecture of the CNN. We are able to implement a (pseudo)random field generator using one layer (one memory/cell) of the CNN, and can run the whole pseudostochastic segmentation process in the CNN architecture using 8 memories/cell. We use simple arithmetic functions (addition, multiplication), equality tests between neighboring pixels, and very simple nonlinear output functions (step, jigsaw). With this architecture, a real VLSI CNN chip can execute a pseudostochastic relaxation algorithm of about 100 iterations in about 1 ms. In the proposed solution the segmentation is unsupervised. We have developed a pixel-level statistical estimation model: the CNN turns the original image into a smooth one, so that two gray-level values are available for every pixel, the original and the smoothed one. These two values are used to estimate the probability distribution of the region label at a given pixel. With the conventional first-order Markov Random Field (MRF) model, some misclassification errors remained at the region boundaries because of estimation difficulties at low SNR; this problem has been avoided by using a larger neighborhood. In our CNN experiments we used a simulation system with a fixed-point integer precision of 16 bits. Our results show that even under these very constrained value representations (the interval is (-64, +64), the accuracy is 0.002), an effective and acceptable segmentation can be obtained.
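The pseudostochastic relaxation the abstract describes can be sketched in software with a standard Metropolis sampler over a first-order MRF, using a squared-error data term toward per-class gray levels and a Potts smoothness term. Plain Metropolis with a crude annealing schedule stands in here for the chip's Modified Metropolis Dynamics, and all names (`metropolis_segmentation`, `means`, `beta`) are illustrative.

```python
import numpy as np

def metropolis_segmentation(obs, means, beta=1.0, temp=2.0, sweeps=20, seed=0):
    """Stochastic relaxation of a first-order MRF segmentation.
    obs: (H, W) gray image; means: per-class mean gray levels.
    Site energy = (obs - means[label])**2 + beta * Potts smoothness.
    (Plain Metropolis as a stand-in for the paper's MMD variant.)"""
    rng = np.random.default_rng(seed)
    H, W = obs.shape
    K = len(means)
    # Initialize each pixel to its nearest class mean.
    labels = np.abs(obs[..., None] - np.asarray(means)).argmin(axis=2)
    for _ in range(sweeps):
        for y in range(H):
            for x in range(W):
                cur, new = labels[y, x], rng.integers(K)
                nbrs = [labels[yy, xx]
                        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < H and 0 <= xx < W]
                def site_energy(l):
                    return (obs[y, x] - means[l]) ** 2 + beta * sum(l != n for n in nbrs)
                dE = site_energy(new) - site_energy(cur)
                # Metropolis acceptance: always downhill, uphill with prob e^(-dE/T).
                if dE <= 0 or rng.random() < np.exp(-dE / temp):
                    labels[y, x] = new
        temp *= 0.9          # simple annealing schedule
    return labels
```

The paper's contribution is running roughly this kind of relaxation loop directly in CNN analog hardware, where ~100 iterations take about 1 ms.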

44 citations

Journal ArticleDOI
TL;DR: A method based on Markov Chain Monte Carlo (MCMC) is proposed to estimate MRF parameters; the pseudo-likelihood is used in place of the full likelihood function and gives good estimation results.
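Pseudo-likelihood replaces the intractable joint likelihood of an MRF with a product of per-site conditional probabilities given each site's neighbours, which makes parameter estimation tractable. The sketch below scores a Potts smoothness parameter beta this way; a simple grid search stands in for the paper's MCMC procedure, and all names (`log_pseudo_likelihood`, `estimate_beta`) are illustrative.

```python
import numpy as np

def log_pseudo_likelihood(labels, beta, K=2):
    """Log pseudo-likelihood of a Potts MRF: sum over sites of
    log P(x_s | neighbours), with P(x_s = l | nbrs) proportional to
    exp(beta * #neighbours agreeing with l)."""
    H, W = labels.shape
    lpl = 0.0
    for y in range(H):
        for x in range(W):
            nbrs = [labels[yy, xx]
                    for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= yy < H and 0 <= xx < W]
            agree = np.array([sum(n == l for n in nbrs) for l in range(K)])
            lpl += beta * agree[labels[y, x]] - np.log(np.sum(np.exp(beta * agree)))
    return lpl

def estimate_beta(labels, grid=np.linspace(0.0, 2.0, 41)):
    """Pick the beta maximizing the pseudo-likelihood (grid search here,
    standing in for the paper's MCMC-based estimation)."""
    scores = [log_pseudo_likelihood(labels, b) for b in grid]
    return float(grid[int(np.argmax(scores))])
```

A smooth label field drives the estimate toward large beta (strong spatial coupling), while a checkerboard field drives it toward zero, which is the qualitative behavior any consistent MRF parameter estimator should show.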

44 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (94% related)
Convolutional neural network: 74.7K papers, 2M citations (93% related)
Feature extraction: 111.8K papers, 2.1M citations (92% related)
Image processing: 229.9K papers, 3.5M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (91% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2024  1
2023  30
2022  128
2021  96
2020  173
2019  204