
Showing papers on "Motion blur" published in 2011


Proceedings ArticleDOI
20 Jun 2011
TL;DR: An efficient patch-based face image quality assessment algorithm is proposed which quantifies the similarity of a face image to a probabilistic face model representing an ‘ideal’ face.
Abstract: In video based face recognition, face images are typically captured over multiple frames in uncontrolled conditions, where head pose, illumination, shadowing, motion blur and focus change over the sequence. Additionally, inaccuracies in face localisation can also introduce scale and alignment variations. Using all face images, including images of poor quality, can actually degrade face recognition performance. While one solution is to use only the ‘best’ images, current face selection techniques are incapable of simultaneously handling all of the abovementioned issues. We propose an efficient patch-based face image quality assessment algorithm which quantifies the similarity of a face image to a probabilistic face model, representing an ‘ideal’ face. Image characteristics that affect recognition are taken into account, including variations in geometric alignment (shift, rotation and scale), sharpness, head pose and cast shadows. Experiments on FERET and PIE datasets show that the proposed algorithm is able to identify images which are simultaneously the most frontal, aligned, sharp and well illuminated. Further experiments on a new video surveillance dataset (termed ChokePoint) show that the proposed method provides better face subsets than existing face selection techniques, leading to significant improvements in recognition accuracy.
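
A hedged sketch of the scoring idea follows (not the authors' code): patches of a geometrically normalised face are scored under per-location Gaussian models fitted to ‘ideal’ training faces, and the mean patch log-likelihood acts as the quality score. The function names and the diagonal-covariance simplification are ours.

```python
import numpy as np

def train_patch_model(aligned_faces, patch=8):
    """Fit a per-location Gaussian (diagonal covariance) from aligned,
    well-illuminated frontal training faces of shape (n, H, W)."""
    n, H, W = aligned_faces.shape
    means, variances = {}, {}
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            block = aligned_faces[:, y:y+patch, x:x+patch].reshape(n, -1)
            means[(y, x)] = block.mean(axis=0)
            variances[(y, x)] = block.var(axis=0) + 1e-6  # avoid zero variance
    return means, variances

def face_quality(img, means, variances, patch=8):
    """Mean per-patch Gaussian log-likelihood; higher means more similar
    to the 'ideal' face model (frontal, aligned, sharp, well lit)."""
    scores = []
    for (y, x), mu in means.items():
        v = variances[(y, x)]
        d = img[y:y+patch, x:x+patch].ravel() - mu
        scores.append(-0.5 * np.mean(d * d / v + np.log(2 * np.pi * v)))
    return float(np.mean(scores))
```

Frames would then be ranked by face_quality and only the top subset passed to the recogniser.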

314 citations


Proceedings ArticleDOI
25 Jul 2011
TL;DR: A robust image-space tracking method is introduced that computes pixel matches directly from the reference frame to all anchor frames, and thereby to the remaining frames in the sequence via sequential matching, enabling parallel propagation in contrast to previous sequential methods.
Abstract: We present a new technique for passive and markerless facial performance capture based on anchor frames. Our method starts with high resolution per-frame geometry acquisition using state-of-the-art stereo reconstruction, and proceeds to establish a single triangle mesh that is propagated through the entire performance. Leveraging the fact that facial performances often contain repetitive subsequences, we identify anchor frames as those which contain similar facial expressions to a manually chosen reference expression. Anchor frames are automatically computed over one or even multiple performances. We introduce a robust image-space tracking method that computes pixel matches directly from the reference frame to all anchor frames, and thereby to the remaining frames in the sequence via sequential matching. This allows us to propagate one reconstructed frame to an entire sequence in parallel, in contrast to previous sequential methods. Our anchored reconstruction approach also limits tracker drift and robustly handles occlusions and motion blur. The parallel tracking and mesh propagation offer low computation times. Our technique will even automatically match anchor frames across different sequences captured on different occasions, propagating a single mesh to all performances.
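
Anchor-frame identification reduces to a nearest-expression search; the sketch below (our simplification, not the paper's tracker) compares each stabilised face crop to the chosen reference expression with plain pixel SSD.

```python
import numpy as np

def find_anchor_frames(frames, reference, thresh):
    """Return indices of frames whose appearance is close to the reference
    expression; these anchors are reconstructed in parallel and the mesh is
    propagated outward from them. frames: iterable of equal-size 2-D arrays."""
    dists = np.array([np.mean((f.astype(float) - reference) ** 2) for f in frames])
    return np.flatnonzero(dists < thresh)
```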

288 citations


Journal ArticleDOI
TL;DR: This paper discusses how the blurred image can be modeled as an integration of the clear scene under a sequence of planar projective transformations that describe the camera's path, and describes how to modify the Richardson-Lucy algorithm to incorporate this new blur model.
Abstract: This paper addresses how to model and correct image blur that arises when a camera undergoes ego motion while observing a distant scene. In particular, we discuss how the blurred image can be modeled as an integration of the clear scene under a sequence of planar projective transformations (i.e., homographies) that describe the camera's path. This projective motion path blur model is more effective at modeling the spatially varying motion blur exhibited by ego motion than conventional methods based on space-invariant blur kernels. To correct the blurred image, we describe how to modify the Richardson-Lucy (RL) algorithm to incorporate this new blur model. In addition, we show that our projective motion RL algorithm can incorporate state-of-the-art regularization priors to improve the deblurred results. The projective motion path blur model, along with the modified RL algorithm, is detailed, together with experimental results demonstrating its overall effectiveness. Statistical analysis on the algorithm's convergence properties and robustness to noise is also provided.
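
The modified RL update replaces convolution with a sum of warps along the camera path. A minimal sketch, assuming pure translations stand in for the paper's homographies (a deliberate simplification; full homography warps would use an image-warping routine):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def blur_op(img, path):
    """Forward blur: average the image warped along the camera path.
    'path' is a list of (dy, dx) translations standing in for homographies."""
    return np.mean([nd_shift(img, t, order=1, mode='nearest') for t in path], axis=0)

def blur_op_adjoint(img, path):
    """Adjoint, approximated here by the inverse warps: negate each translation."""
    return np.mean([nd_shift(img, (-t[0], -t[1]), order=1, mode='nearest')
                    for t in path], axis=0)

def projective_motion_rl(blurred, path, iters=30):
    """Richardson-Lucy with a motion-path blur model instead of a fixed kernel.
    'blurred' should be a float array."""
    est = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        ratio = blurred / np.maximum(blur_op(est, path), 1e-8)
        est = est * blur_op_adjoint(ratio, path)
    return est
```

Regularization priors would enter as an extra multiplicative term in the update, as in standard regularized RL.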

257 citations


Reference BookDOI
01 Jan 2011
TL;DR: This edited volume surveys image super-resolution, including adaptive Wiener filtering, kernel regression, Bayesian frameworks, registration, spatially varying blur, and applications in medical imaging and multichannel sampling.
Abstract: Contents:
- Image Super-Resolution: Historical Overview and Future Challenges (J. Yang and T. Huang): Introduction to Super-Resolution; Notations; Techniques for Super-Resolution; Challenge Issues for Super-Resolution
- Super-Resolution Using Adaptive Wiener Filters (R.C. Hardie): Introduction; Observation Model; AWF SR Algorithms; Experimental Results; Conclusions; Acknowledgments
- Locally Adaptive Kernel Regression for Space-Time Super-Resolution (H. Takeda and P. Milanfar): Introduction; Adaptive Kernel Regression; Examples; Conclusion; Appendix
- Super-Resolution With Probabilistic Motion Estimation (M. Protter and M. Elad): Introduction; Classic Super-Resolution: Background; The Proposed Algorithm; Experimental Validation; Summary
- Spatially Adaptive Filtering as Regularization in Inverse Imaging (A. Danielyan, A. Foi, V. Katkovnik, and K. Egiazarian): Introduction; Iterative Filtering as Regularization; Compressed Sensing; Super-Resolution; Conclusions
- Registration for Super-Resolution (P. Vandewalle, L. Sbaiz, and M. Vetterli): Camera Model; What Is Resolution?; Super-Resolution as a Multichannel Sampling Problem; Registration of Totally Aliased Signals; Registration of Partially Aliased Signals; Conclusions
- Towards Super-Resolution in the Presence of Spatially Varying Blur (M. Sorel, F. Sroubek, and J. Flusser): Introduction; Defocus and Optical Aberrations; Camera Motion Blur; Scene Motion; Algorithms; Conclusion; Acknowledgments
- Toward Robust Reconstruction-Based Super-Resolution (M. Tanaka and M. Okutomi): Introduction; Overviews; Robust SR Reconstruction with Pixel Selection; Robust Super-Resolution Using MPEG Motion Vectors; Robust Registration for Super-Resolution; Conclusions
- Multi-Frame Super-Resolution from a Bayesian Perspective (L. Pickup, S. Roberts, A. Zisserman, and D. Capel): The Generative Model; Where Super-Resolution Algorithms Go Wrong; Simultaneous Super-Resolution; Bayesian Marginalization; Concluding Remarks
- Variational Bayesian Super Resolution Reconstruction (S. Derin Babacan, R. Molina, and A.K. Katsaggelos): Introduction; Problem Formulation; Bayesian Framework for Super Resolution; Bayesian Inference; Variational Bayesian Inference Using TV Image Priors; Experiments; Estimation of Motion and Blur; Conclusions; Acknowledgements
- Pattern Recognition Techniques for Image Super-Resolution (K. Ni and T.Q. Nguyen): Introduction; Nearest Neighbor Super-Resolution; Markov Random Fields and Approximations; Kernel Machines for Image Super-Resolution; Multiple Learners and Multiple Regressions; Design Considerations and Examples; Remarks; Glossary
- Super-Resolution Reconstruction of Multi-Channel Images (O.G. Sezer and Y. Altunbasak): Introduction; Notation; Image Acquisition Model; Subspace Representation; Reconstruction Algorithm; Experiments & Discussions; Conclusion
- New Applications of Super-Resolution in Medical Imaging (M.D. Robinson, S.J. Chiu, C.A. Toth, J.A. Izatt, J.Y. Lo, and S. Farsiu): Introduction; The Super-Resolution Framework; New Medical Imaging Applications; Conclusion; Acknowledgment
- Practicing Super-Resolution: What Have We Learned? (N. Bozinovic): Abstract; Introduction; MotionDSP: History and Concepts; Markets and Applications; Technology; Results; Lessons Learned; Conclusions

241 citations


Proceedings ArticleDOI
28 Nov 2011
TL;DR: A simple and effective automatic image blurred region detection and classification technique that can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.
Abstract: Many digital images contain blurred regions which are caused by motion or defocus. Automatic detection and classification of blurred image regions are very important for different multimedia analysis tasks. This paper presents a simple and effective automatic image blurred region detection and classification technique. In the proposed technique, blurred image regions are first detected by examining singular value information for each image pixel. The blur types (i.e. motion blur or defocus blur) are then determined based on an alpha channel constraint that requires neither image deblurring nor blur kernel estimation. Extensive experiments have been conducted over a dataset that consists of 200 blurred image regions and 200 image regions with no blur that are extracted from 100 digital images. Experimental results show that the proposed technique detects and classifies the two types of image blurs accurately. The proposed technique can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.
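
A compact, hedged rendition of the singular-value cue (block-wise rather than truly per-pixel, and with a threshold we chose for illustration):

```python
import numpy as np

def blur_map(gray, block=32, k=3):
    """Per-block blur measure from singular values: blurred patches are closer
    to low-rank, so the top-k singular values carry a larger energy share."""
    H, W = gray.shape
    out = np.zeros((H // block, W // block))
    for by in range(out.shape[0]):
        for bx in range(out.shape[1]):
            patch = gray[by*block:(by+1)*block, bx*block:(bx+1)*block]
            s = np.linalg.svd(patch - patch.mean(), compute_uv=False)
            out[by, bx] = s[:k].sum() / max(s.sum(), 1e-8)
    return out  # e.g. flag blocks with out > 0.75 as blurred (illustrative cut-off)
```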

187 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper presents a joint blind image restoration and recognition method based on the sparse representation prior to handle the challenging problem of face recognition from low-quality images, where the degradation model is realistic and totally unknown.
Abstract: Most previous visual recognition systems simply assume ideal inputs without real-world degradations, such as low resolution, motion blur and out-of-focus blur. In the presence of such unknown degradations, the conventional approach first resorts to blind image restoration and then feeds the restored image into a classifier. Treating restoration and recognition separately, such a straightforward approach, however, suffers greatly from the defective output of the ill-posed blind image restoration. In this paper, we present a joint blind image restoration and recognition method based on the sparse representation prior to handle the challenging problem of face recognition from low-quality images, where the degradation model is realistic and totally unknown. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the training set, which indicates the identity of the test image. The proposed algorithm achieves simultaneous restoration and recognition by iteratively solving the blind image restoration in pursuit of the sparsest representation for recognition. Based on such a sparse representation prior, we demonstrate that the image restoration task and the recognition task can benefit greatly from each other. Extensive experiments on face datasets under various degradations are carried out and the results of our joint model show significant improvements over conventional methods of treating the two tasks independently.
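
The sparse-representation prior can be illustrated with a much-reduced sketch: instead of the paper's iterative blind restoration, we select among a few candidate restorers (e.g. Wiener filters for different PSFs, supplied by the caller) the one whose output is best sparsely coded by the training gallery; the winning code indicates the identity. All names here are ours.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Tiny orthogonal matching pursuit: sparse code of y over dictionary D
    (columns = unit-norm training faces). Returns code and residual norm."""
    residual, idx = y.copy(), []
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code, float(np.linalg.norm(residual))

def restore_and_recognize(degraded, candidate_restorers, gallery, labels):
    """Pick the restoration whose result has the sparsest (lowest-residual)
    representation in the gallery; that representation names the subject."""
    best = None
    for restore in candidate_restorers:
        y = restore(degraded).ravel()
        y /= np.linalg.norm(y) + 1e-8
        code, err = omp(gallery, y)
        if best is None or err < best[0]:
            best = (err, labels[int(np.argmax(np.abs(code)))])
    return best[1]
```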

183 citations


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper shows how finding similar ST-patches can be done both efficiently (with a randomized search in space-time), and at sub-frame accuracy (despite severe motion aliasing).
Abstract: Spatial Super Resolution (SR) aims to recover fine image details, smaller than a pixel size. Temporal SR aims to recover rapid dynamic events that occur faster than the video frame-rate, and are therefore invisible or seen incorrectly in the video sequence. Previous methods for Space-Time SR combined information from multiple video recordings of the same dynamic scene. In this paper we show how this can be done from a single video recording. Our approach is based on the observation that small space-time patches (‘ST-patches’, e.g., 5×5×3) of a single ‘natural video’, recur many times inside the same video sequence at multiple spatio-temporal scales. We statistically explore the degree of these ST-patch recurrences inside ‘natural videos’, and show that this is a very strong statistical phenomenon. Space-time SR is obtained by combining information from multiple ST-patches at sub-frame accuracy. We show how finding similar ST-patches can be done both efficiently (with a randomized search in space-time), and at sub-frame accuracy (despite severe motion aliasing). Our approach is particularly useful for temporal SR, resolving both severe motion aliasing and severe motion blur in complex ‘natural videos’.
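
The patch-recurrence search at the heart of the method can be sketched exhaustively (the paper uses a randomized space-time search and also looks across spatio-temporal scales; this brute-force version is ours and only for illustration):

```python
import numpy as np

def find_recurrence(video, t, y, x, pt=3, p=5):
    """Find the best match of the ST-patch at (t, y, x) elsewhere in the same
    video. video: (T, H, W) array; the patch spans pt frames and p x p pixels."""
    T, H, W = video.shape
    q = video[t:t+pt, y:y+p, x:x+p]
    best, where = np.inf, None
    for tt in range(T - pt + 1):
        for yy in range(H - p + 1):
            for xx in range(W - p + 1):
                if (tt, yy, xx) == (t, y, x):
                    continue
                d = np.sum((video[tt:tt+pt, yy:yy+p, xx:xx+p] - q) ** 2)
                if d < best:
                    best, where = d, (tt, yy, xx)
    return where, best
```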

125 citations


Journal ArticleDOI
TL;DR: This work shows that PET motion correction using motion fields measured with tagged-MRI in simultaneous PET-MRI acquisitions can be made practical for clinical application and that doing so has the potential to remove motion blur in whole-body PET studies of the torso.
Abstract:
Purpose: We propose a novel approach for PET respiratory motion correction using tagged-MRI and simultaneous PET-MRI acquisitions.
Methods: We use a tagged-MRI acquisition followed by motion tracking in the phase domain to estimate the nonrigid deformation of biological tissues during breathing. In order to accurately estimate motion even in the presence of noise and susceptibility artifacts, we regularize the traditional HARP tracking strategy using a quadratic roughness penalty on neighboring displacement vectors (R-HARP). We then incorporate the motion fields estimated with R-HARP in the system matrix of an MLEM PET reconstruction algorithm formulated both for sinogram and list-mode data representations. This approach allows reconstruction of all detected coincidences in a single image while modeling the effect of motion both in the emission and the attenuation maps. At present, tagged-MRI does not allow estimation of motion in the lungs and our approach is therefore limited to motion correction in soft tissues. Since it is difficult to assess the accuracy of motion correction approaches in vivo, we evaluated the proposed approach in numerical simulations of simultaneous PET-MRI acquisitions using the NCAT phantom. We also assessed its practical feasibility in PET-MRI acquisitions of a small deformable phantom that mimics the complex deformation pattern of a lung that we imaged on a combined PET-MRI brain scanner.
Results: Simulations showed that the R-HARP tracking strategy accurately estimated realistic respiratory motion fields for different levels of noise in the tagged-MRI simulation. In simulations of tumors exhibiting increased uptake, contrast estimation was 20% more accurate with motion correction than without. Signal-to-noise ratio (SNR) was more than 100% greater when performing motion-corrected reconstruction which included all counts, compared to when reconstructing only coincidences detected in the first of eight gated frames. These results were confirmed in our proof-of-principle PET-MRI acquisitions, indicating that our motion correction strategy is accurate, practically feasible, and is therefore ready to be tested in vivo.
Conclusions: This work shows that PET motion correction using motion fields measured with tagged-MRI in simultaneous PET-MRI acquisitions can be made practical for clinical application and that doing so has the potential to remove motion blur in whole-body PET studies of the torso.
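
The reconstruction step admits a schematic sketch: a single MLEM image is estimated from all gates at once, with each gate's deformation folded into the forward model. Here project, backproject and the per-gate warp functions are caller-supplied stand-ins, and the adjoint of each warp is approximated by its inverse deformation; none of this is the authors' implementation.

```python
import numpy as np

def motion_corrected_mlem(sinograms, warps, inv_warps, project, backproject, iters=20):
    """MLEM over all respiratory gates: gate g's expected data is
    project(warps[g](image)), so every detected count updates one
    motion-free image while breathing motion is modelled explicitly."""
    img = np.ones_like(inv_warps[0](backproject(sinograms[0])))
    sens = sum(inv_warps[g](backproject(np.ones_like(sinograms[g])))
               for g in range(len(sinograms)))
    for _ in range(iters):
        update = np.zeros_like(img)
        for g, sino in enumerate(sinograms):
            expected = project(warps[g](img))
            update += inv_warps[g](backproject(sino / np.maximum(expected, 1e-8)))
        img *= update / np.maximum(sens, 1e-8)
    return img
```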

116 citations


Proceedings ArticleDOI
12 Dec 2011
TL;DR: A novel filter selection procedure is developed that robustly solves this problem even with noisy input data and achieves significant improvements in numerical error and visual image quality.
Abstract: We introduce a novel approach for image space adaptive sampling and reconstruction in Monte Carlo rendering. We greedily minimize relative mean squared error (MSE) by iterating over two steps. First, given a current sample distribution, we optimize over a discrete set of filters at each pixel and select the filter that minimizes the pixel error. Next, given the current filter selection, we distribute additional samples to further reduce MSE. The success of our approach hinges on a robust technique to select suitable per pixel filters. We develop a novel filter selection procedure that robustly solves this problem even with noisy input data. We evaluate our approach using effects such as motion blur, depth of field, interreflections, etc. We provide a comparison to a state-of-the-art algorithm based on wavelet shrinkage and show that we achieve significant improvements in numerical error and visual image quality. Our approach is simple to implement, requires a single user parameter, and is compatible with standard Monte Carlo rendering.
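
The paper's robust per-pixel filter selection is its key contribution; as a hedged stand-in, the sketch below scores each candidate filter by cross-validating two independent half-buffers of samples, a cruder but related error estimate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_filters(half_a, half_b, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Per-pixel filter selection for Monte Carlo denoising: filtering one
    half-buffer and comparing against the other (noisy but unbiased) half
    estimates the per-pixel MSE of each candidate filter."""
    errs = np.stack([(gaussian_filter(half_a, s) - half_b) ** 2 for s in sigmas])
    choice = np.argmin(errs, axis=0)                       # filter index per pixel
    filtered = np.stack([gaussian_filter((half_a + half_b) / 2, s) for s in sigmas])
    return np.take_along_axis(filtered, choice[None], axis=0)[0], choice
```

In the paper's loop, the per-pixel error estimates would then steer where the next batch of samples is placed.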

109 citations


Journal ArticleDOI
TL;DR: A generalized approach to decoupling shading from visibility sampling in graphics pipelines, inspired by the Reyes rendering architecture; it can be thought of as a generalization of multisample antialiasing to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading.
Abstract: We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading. We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.
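
The memoization mechanism is easy to miniaturise. A toy sketch (the sample layout, key quantisation and shade callback are all our assumptions):

```python
def shade_decoupled(visibility_samples, shade, shading_grid=1.0):
    """Decoupled sampling in miniature: many visibility samples map through a
    many-to-one key to one shading sample, memoized in a cache and reused."""
    cache, frame = {}, []
    for vs in visibility_samples:   # vs: {'prim': id, 'u': float, 'v': float, 'pixel': (x, y)}
        # Quantising surface coordinates defines the many-to-one mapping; under
        # motion or defocus blur, many visibility samples share one key.
        key = (vs['prim'], int(vs['u'] * shading_grid), int(vs['v'] * shading_grid))
        if key not in cache:
            cache[key] = shade(vs)  # shaded once per key
        frame.append((vs['pixel'], cache[key]))
    return frame
```

Lowering shading_grid coarsens the shading rate, trading quality for fewer shade() calls, which is exactly the controllable-shading-rate knob described above.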

107 citations


Proceedings ArticleDOI
25 Jul 2011
TL;DR: This paper describes a general reconstruction technique that exploits the anisotropy in the temporal light field and permits efficient reuse of samples between pixels, multiplying the effective sampling rate by a large factor.
Abstract: Traditionally, effects that require evaluating multidimensional integrals for each pixel, such as motion blur, depth of field, and soft shadows, suffer from noise due to the variance of the high-dimensional integrand. In this paper, we describe a general reconstruction technique that exploits the anisotropy in the temporal light field and permits efficient reuse of samples between pixels, multiplying the effective sampling rate by a large factor. We show that our technique can be applied in situations that are challenging or impossible for previous anisotropic reconstruction methods, and that it can yield good results with very sparse inputs. We demonstrate our method for simultaneous motion blur, depth of field, and soft shadows.

Journal ArticleDOI
TL;DR: This paper proposes a novel method for recognizing faces degraded by blur using deblurring of facial images and shows and explains how combining the proposed facial deblur inference with the local phase quantization (LPQ) method can further enhance the performance.
Abstract: This paper proposes a novel method for recognizing faces degraded by blur using deblurring of facial images. The main issue is how to infer a Point Spread Function (PSF) representing the process of blur on faces. Inferring a PSF from a single facial image is an ill-posed problem. Our method uses learned prior information derived from a training set of blurred faces to make the problem more tractable. We construct a feature space such that blurred faces degraded by the same PSF are similar to one another. We learn statistical models that represent prior knowledge of predefined PSF sets in this feature space. A query image of unknown blur is compared with each model and the closest one is selected for PSF inference. The query image is deblurred using the PSF corresponding to that model and is thus ready for recognition. Experiments on a large face database (FERET) artificially degraded by focus or motion blur show that our method substantially improves the recognition performance compared to existing methods. We also demonstrate improved performance on real blurred images on the FRGC 1.0 face database. Furthermore, we show and explain how combining the proposed facial deblur inference with the local phase quantization (LPQ) method can further enhance the performance.
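
A condensed sketch of the inference-then-deblur pipeline (our own minimal rendition: the feature, the nearest-model rule and the Wiener step are simplified stand-ins for the paper's learned statistical models):

```python
import numpy as np

def blur_feature(img, size=32):
    """A feature in which faces blurred by the same PSF cluster: normalised
    low-frequency log power spectrum."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    f = np.log1p(F[cy-size//2:cy+size//2, cx-size//2:cx+size//2]).ravel()
    return f / (np.linalg.norm(f) + 1e-8)

def infer_psf(query, psf_models):
    """psf_models: {psf_id: (mean_feature, kernel)} learned from training faces
    degraded by each candidate PSF; return the kernel of the closest model."""
    f = blur_feature(query)
    best = min(psf_models, key=lambda k: np.linalg.norm(f - psf_models[k][0]))
    return psf_models[best][1]

def wiener_deblur(img, psf, k=0.01):
    """Deconvolve with the inferred PSF via a simple Wiener filter."""
    pad = [((s - p) // 2, (s - p + 1) // 2) for s, p in zip(img.shape, psf.shape)]
    H = np.fft.fft2(np.fft.ifftshift(np.pad(psf, pad)))
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))
```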

Patent
17 Apr 2011
TL;DR: In this article, a digital image acquisition system includes a portable apparatus for capturing digital images and a digital processing component for detecting, analyzing and informing the photographer regarding motion blur, and for reducing camera motion blur in an image captured by the apparatus.
Abstract: A digital image acquisition system includes a portable apparatus for capturing digital images and a digital processing component for detecting, analyzing and informing the photographer regarding motion blur, and for reducing camera motion blur in an image captured by the apparatus. The digital processing component operates by comparing the image with at least one other image, for example a preview image, of nominally the same scene taken outside the exposure period of the main image. In one embodiment the digital processing component determines the degree of artifacts and whether to inform the user that the image is blurred by identifying at least one feature in a single preview image which is relatively less blurred than the corresponding feature in the main image. In another embodiment, the digital processing component calculates a trajectory of at least one feature in a plurality of preview images, extrapolates such feature on to the main image, calculates a PSF in respect of the feature, and informs the user based on the calculated PSF. In another embodiment the digital processing unit after determining the degree of blur notifies the photographer of the existing blur or automatically invokes consecutive captures. In another embodiment, the digital processing unit determines whether the image quality is acceptable from real time analysis of the captured image and provides this information to the user. Such real time analysis may use the auto focusing mechanism to qualitatively determine the PSF.
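
The preview-comparison idea reduces to a sharpness test; a minimal sketch (the score and threshold are our choices, not the patent's):

```python
import numpy as np

def sharpness(img):
    """Mean gradient magnitude, a crude whole-image sharpness score."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def main_image_blurred(main, preview, ratio=0.8):
    """Compare the captured image against a preview of nominally the same scene
    taken outside the exposure period: markedly lower sharpness in the main
    image suggests camera-motion blur, triggering a warning or recapture."""
    return sharpness(main) < ratio * sharpness(preview)
```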

Journal ArticleDOI
TL;DR: A novel method of detecting splicing in images, using discrepancies in motion blur, is presented, which uses motion blur estimation through image gradients in order to detect inconsistencies between the spliced region and the rest of the image.
Abstract: The widespread availability of photo manipulation software has made it unprecedentedly easy to manipulate images for malicious purposes. Image splicing is one such form of tampering. In recent years, researchers have proposed various methods for detecting such splicing. In this paper, we present a novel method of detecting splicing in images, using discrepancies in motion blur. We use motion blur estimation through image gradients in order to detect inconsistencies between the spliced region and the rest of the image. We also develop a new measure to assist in inconsistent region segmentation in images that contain small amounts of motion blur. Experimental results show that our technique provides good segmentation of regions with inconsistent motion blur. We also provide quantitative comparisons with other existing blur-based techniques over a database of images. It is seen that our technique gives significantly better detection results.
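
A hedged sketch of the gradient-based direction cue (a structure-tensor variant of our choosing, computed per block):

```python
import numpy as np

def block_blur_directions(gray, block=64):
    """Per-block motion-blur direction from image gradients: blur suppresses
    gradients along the motion, so the weakest-gradient axis of the block's
    structure tensor approximates the blur direction. Spliced regions then
    show up as blocks whose angle disagrees with the rest of the image."""
    gy, gx = np.gradient(gray.astype(float))
    H, W = gray.shape
    angles = np.zeros((H // block, W // block))
    for by in range(angles.shape[0]):
        for bx in range(angles.shape[1]):
            sl = np.s_[by*block:(by+1)*block, bx*block:(bx+1)*block]
            J = np.array([[np.sum(gx[sl] ** 2), np.sum(gx[sl] * gy[sl])],
                          [np.sum(gx[sl] * gy[sl]), np.sum(gy[sl] ** 2)]])
            w, v = np.linalg.eigh(J)          # ascending eigenvalues
            angles[by, bx] = np.arctan2(v[1, 0], v[0, 0])
    return angles
```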

Journal ArticleDOI
TL;DR: In this article, the authors address effects that require evaluating multidimensional integrals for each pixel, such as motion blur, depth of field, and soft shadows, which suffer from noise due to the variance of the high-dimensional integrand.
Abstract: Traditionally, effects that require evaluating multidimensional integrals for each pixel, such as motion blur, depth of field, and soft shadows, suffer from noise due to the variance of the high-dimensional integrand.

Journal ArticleDOI
TL;DR: This work analyzes the mechanisms that produce motion blur in recording devices and the methods that can simulate it in computer generated images, and proposes a number of alternative classifications that will help the reader identify the best technique for a particular scenario.
Abstract: Motion blur is a fundamental cue in the perception of objects in motion. This phenomenon manifests as a visible trail along the trajectory of the object and is the result of the combination of relative motion and light integration taking place in film and electronic cameras. In this work, we analyse the mechanisms that produce motion blur in recording devices and the methods that can simulate it in computer generated images. Light integration over time is one of the most expensive processes to simulate in high-quality renders; as such, we make an in-depth review of the existing algorithms and we categorize them in the context of a formal model that highlights their differences, strengths and limitations. We conclude this report by proposing a number of alternative classifications that will help the reader identify the best technique for a particular scenario.
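
The light-integration process described above is exactly what the classic accumulation-buffer technique simulates: render the scene at several instants within the shutter interval and average. A minimal sketch (render is a caller-supplied function of time):

```python
import numpy as np

def accumulation_motion_blur(render, t_open, t_close, n=32):
    """Approximate light integration over the exposure by averaging n renders
    spread across the shutter interval; larger n reduces banding at a
    linearly growing rendering cost."""
    times = np.linspace(t_open, t_close, n)
    acc = sum(render(t).astype(float) for t in times)
    return acc / n
```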

Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper presents a unified algorithm to handle motion blur for visual SLAM, including the blur-robust data association method and the fast deblurring method, which improve each other's results significantly.
Abstract: Handling motion blur is one of the important issues in visual SLAM. For a fast-moving camera, motion blur is an unavoidable effect and it can degrade the results of localization and reconstruction severely. In this paper, we present a unified algorithm to handle motion blur for visual SLAM, including the blur-robust data association method and the fast deblurring method. In our framework, camera motion and 3-D point structures are reconstructed by SLAM, and the information from SLAM makes the estimation of motion blur quite easy and effective. Conversely, estimating motion blur enables robust data association and drift-free localization of SLAM with blurred images. The blurred images are recovered by fast deconvolution using SLAM data, and more features are extracted and registered to the map so that the SLAM procedure can be continued even with the blurred images. In this way, visual SLAM and deblurring are solved simultaneously, and improve each other's results significantly.
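
The key observation, that SLAM makes blur estimation easy, can be sketched directly: given camera poses at consecutive frames and a mapped 3-D point, the point's image-space displacement during the exposure predicts its blur vector. A minimal sketch under our own conventions:

```python
import numpy as np

def predicted_blur_vector(pose_prev, pose_curr, point3d, K, exposure_frac=0.5):
    """pose = (R, t) with x_cam = R @ X + t; K: 3x3 intrinsics. The returned
    2-vector is the predicted image-space blur of the point, usable both for
    blur-robust matching and as a local deblurring kernel direction/length."""
    def project(R, t):
        xc = K @ (R @ point3d + t)
        return xc[:2] / xc[2]
    return exposure_frac * (project(*pose_curr) - project(*pose_prev))
```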

Proceedings ArticleDOI
12 Dec 2011
TL;DR: This work introduces a method for increasing the framerate of real-time rendering applications that reconstructs intermediate frames from a pair of consecutive rendered frames, and reduces both shading and geometry computations by performing reprojection using only image-based buffers.
Abstract: We introduce a method for increasing the framerate of real-time rendering applications. Whereas many existing temporal upsampling strategies only reuse information from previous frames, our bidirectional technique reconstructs intermediate frames from a pair of consecutive rendered frames. This significantly improves the accuracy and efficiency of data reuse since very few pixels are simultaneously occluded in both frames. We present two versions of this basic algorithm. The first is appropriate for fill-bound scenes as it limits the number of expensive shading calculations, but involves rasterization of scene geometry at each intermediate frame. The second version, our more significant contribution, reduces both shading and geometry computations by performing reprojection using only image-based buffers. It warps and combines the adjacent rendered frames using an efficient iterative search on their stored scene depth and flow. Bidirectional reprojection introduces a small amount of lag. We perform a user study to investigate this lag, and find that its effect is minor. We demonstrate substantial performance improvements (3--4x) for a variety of applications, including vertex-bound and fill-bound scenes, multi-pass effects, and motion blur.
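
As a rough illustration of bidirectional reuse (not the paper's iterative depth-and-flow search), an intermediate frame can be gathered from both neighbouring frames along stored flow; occlusion handling is omitted here, whereas the real method resolves it with its image-space search.

```python
import numpy as np

def midpoint_frame(f0, f1, flow01, t=0.5):
    """Blend two rendered frames into an intermediate one at time t.
    flow01: HxWx2 array of (dy, dx) flow from f0 to f1."""
    H, W = f0.shape[:2]
    yy, xx = np.mgrid[0:H, 0:W].astype(float)

    def gather(img, dy, dx):       # nearest-neighbour backward gather
        y = np.clip(np.rint(yy + dy), 0, H - 1).astype(int)
        x = np.clip(np.rint(xx + dx), 0, W - 1).astype(int)
        return img[y, x]

    a = gather(f0, -t * flow01[..., 0], -t * flow01[..., 1])
    b = gather(f1, (1 - t) * flow01[..., 0], (1 - t) * flow01[..., 1])
    return (1 - t) * a + t * b
```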

Proceedings ArticleDOI
05 Jan 2011
TL;DR: A system for localizing and deblurring motion-blurred 2D barcodes based on coded exposure imaging, which enables well-posed deconvolution and decoding over a wider range of velocities, together with an approach for motion direction estimation in coded exposure images.
Abstract: We describe a system for localizing and deblurring motion-blurred 2D barcodes. Previous work on barcode detection and deblurring has mainly focused on 1D barcodes, and has employed traditional image acquisition which is not robust to motion blur. Our solution is based on coded exposure imaging which, as we show, enables well-posed deconvolution and decoding over a wider range of velocities. To support this solution, we developed a simple and effective approach for 2D barcode localization under motion blur, a metric for evaluating the quality of the deblurred 2D barcodes, and an approach for motion direction estimation in coded exposure images. We tested our system on real camera images of three popular 2D barcode symbologies: Data Matrix, PDF417 and Aztec Code.
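
The well-posed-deconvolution claim has a crisp numerical illustration: an ordinary open shutter produces a box blur whose spectrum has exact zeros, while a fluttered (coded) shutter keeps all frequencies. The 32-chop code below is our own arbitrary example, not the one used in the paper.

```python
import numpy as np

# Hypothetical binary flutter-shutter code: the shutter opens (1) and closes (0)
# within a single exposure, making the motion-blur kernel broadband.
code = np.array([1,0,1,1,0,0,1,0,1,1,1,0,0,1,0,1,
                 1,0,0,1,1,1,0,1,0,0,1,0,1,1,0,1], float)

def min_dft_magnitude(kernel, n=256):
    """Smallest DFT magnitude of a blur kernel: values near zero mean lost
    frequencies and therefore ill-posed deconvolution."""
    return float(np.abs(np.fft.fft(kernel, n)).min())

box = np.ones_like(code) / code.size
print(min_dft_magnitude(box))                 # ~0: ordinary blur destroys frequencies
print(min_dft_magnitude(code / code.sum()))   # bounded away from 0: invertible
```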

Journal ArticleDOI
TL;DR: This work presents a fast-updating Lissajous image reconstruction methodology that uses an increased image frame rate beyond the pattern repeat rate generally used in conventional Lissajous image reconstruction methods, at the cost of single-frame pixel density.
Abstract: We present a fast-updating Lissajous image reconstruction methodology that uses an increased image frame rate beyond the pattern repeat rate generally used in conventional Lissajous image reconstruction methods. The fast display rate provides increased dynamic information and reduced motion blur, as compared to conventional Lissajous reconstruction, at the cost of single-frame pixel density. Importantly, this method does not discard any information from the conventional Lissajous image reconstruction, and frames from the complete Lissajous pattern can be displayed simultaneously. We present the theoretical background for this image reconstruction methodology along with images and video taken using the algorithm in a custom-built miniaturized multiphoton microscopy system.
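
The reconstruction idea can be sketched as a binning problem: samples from the Lissajous trajectory are assigned to display frames shorter than the full pattern repeat. A simplified rendition (the parameter names and NaN convention are ours):

```python
import numpy as np

def lissajous_subframes(samples, fx, fy, rate, frame_hz, grid=128):
    """Bin a 1-D intensity stream sampled at 'rate' Hz along the trajectory
    x = sin(2*pi*fx*t), y = sin(2*pi*fy*t) into frames updated at frame_hz,
    faster than the pattern repeat: each sub-frame is sparser (NaN = pixel
    not visited) but refreshes quickly, reducing motion blur."""
    t = np.arange(samples.size) / rate
    ix = ((np.sin(2 * np.pi * fx * t) + 1) / 2 * (grid - 1)).astype(int)
    iy = ((np.sin(2 * np.pi * fy * t) + 1) / 2 * (grid - 1)).astype(int)
    per = int(rate / frame_hz)
    frames = []
    for s in range(0, samples.size - per + 1, per):
        img = np.full((grid, grid), np.nan)
        img[iy[s:s+per], ix[s:s+per]] = samples[s:s+per]
        frames.append(img)
    return frames
```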

Journal ArticleDOI
TL;DR: Evaluations of listener motion strategies demonstrated that two strategies were particularly effective for localisation; one was simply to move towards the most likely source location, which is beneficial in increasing the signal-to-noise ratio, particularly in reverberant conditions.

Journal ArticleDOI
TL;DR: To ensure the accuracy of a star sensor in a highly dynamic environment, a restoration method is proposed to remove the motion blur.

Journal ArticleDOI
TL;DR: A post-reconstruction framework to estimate respiratory motion and reduce the motion blur of PET images acquired in ungated mode is proposed, and it is demonstrated that WRL outperforms RL and a Wiener filter in reducing the motion blur with one to two more iterations.
Abstract: Respiratory motion results in significant motion blur in thoracic positron emission tomography (PET) imaging. Existing approaches to correct the blurring artifact involve acquiring the images in gated mode and using complicated reconstruction algorithms. In this paper, we propose a post-reconstruction framework to estimate respiratory motion and reduce the motion blur of PET images acquired in ungated mode. Our method includes two steps: one is to use minmax directional derivative analysis and local auto-correlation analysis to identify the two parameters, blur direction and blur extent, respectively, and another is to employ WRL, an à trous wavelet-denoising modified Richardson-Lucy (RL) deconvolution, to reduce the motion blur based on the identified parameters. The mobile phantom data were first used to test the method before it was applied to 32 cases of clinical lung tumor PET data. Results showed that the blur extent of phantom images in different directions was accurately identified, and WRL can remove the majority of motion blur within ten iterations. The blur extent of clinical images was estimated to be 12.1 ± 3.7 mm in the direction of 74 ± 3° relative to the image horizontal axis. The quality of clinical images was significantly improved, both from visual inspection and quantitative evaluation after deconvolution. It was demonstrated that WRL outperforms RL and a Wiener filter in reducing the motion blur with one to two more iterations. The proposed method is easy to implement and thus could be a useful tool to reduce the effect of respiration in ungated thoracic PET imaging.
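
The blur-extent step has a compact analogue: for linear motion blur, the autocorrelation of the image derivative taken along the blur direction dips at a lag equal to the blur length. The sketch below is our simplified version of that second step (the direction is assumed already known):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def blur_extent(gray, angle, max_len=30):
    """Estimate blur length (pixels) along direction 'angle' (radians) from
    the minimum of the directional-derivative autocorrelation."""
    gy, gx = np.gradient(gray.astype(float))
    deriv = gx * np.cos(angle) + gy * np.sin(angle)
    deriv -= deriv.mean()
    ac = [np.mean(deriv * nd_shift(deriv, (k * np.sin(angle), k * np.cos(angle)),
                                   order=1))
          for k in range(1, max_len + 1)]
    return int(np.argmin(ac)) + 1
```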

Journal ArticleDOI
TL;DR: This paper shows how to obtain better alpha mattes by introducing a regularization term in the matting formulation to account for the object’s motion, and outlines a method for estimating local object motion based on local gradient statistics from the original image.
Abstract: This paper addresses the problem of matting motion blurred objects from a single image. Existing single image matting methods are designed to extract static objects that have fractional pixel occupancy. This arises because the physical scene object has a finer resolution than the discrete image pixel and therefore only occupies a fraction of the pixel. For a motion blurred object, however, fractional pixel occupancy is attributed to the object's motion over the exposure period. While conventional matting techniques can be used to matte motion blurred objects, they are not formulated in a manner that considers the object's motion and tend to work only when the object is on a homogeneous background. We show how to obtain better alpha mattes by introducing a regularization term in the matting formulation to account for the object's motion. In addition, we outline a method for estimating local object motion based on local gradient statistics from the original image. For the sake of completeness, we also discuss how user markup can be used to denote the local direction in lieu of motion estimation. Improvements to alpha mattes computed with our regularization are demonstrated on a variety of examples.

Patent
17 Feb 2011
TL;DR: In this article, a steerable telescope was used to acquire the image of a subject's facial feature(s), and a first computational imaging element was configured to minimize the effect of defocus in the image.
Abstract: The present invention relates to a method and apparatus for long-range facial and ocular acquisition. One embodiment of a system for acquiring an image of a subject's facial feature(s) includes a steerable telescope configured to acquire the image of the facial feature(s), a first computational imaging element configured to minimize the effect of defocus in the image of the facial feature(s), and a second computational imaging element configured to minimize the effects of motion blur. In one embodiment, the detecting, the acquiring, the minimizing the effect of the motion, and the minimizing the effect of the defocus are performed automatically without a human input.

Journal ArticleDOI
TL;DR: A fully 3-D deblurring method is proposed in this paper to reduce motion blur from a single motion-blurred video to produce a high-resolution video in both space and time.
Abstract: Although spatial deblurring is relatively well understood by assuming that the blur kernel is shift invariant, motion blur is not so when we attempt to deconvolve on a frame-by-frame basis: this is because, in general, videos include complex, multilayer transitions. Indeed, we face an exceedingly difficult problem in motion deblurring of a single frame when the scene contains motion occlusions. Instead of deblurring video frames individually, a fully 3-D deblurring method is proposed in this paper to reduce motion blur from a single motion-blurred video to produce a high-resolution video in both space and time. Unlike other existing approaches, the proposed deblurring kernel is free from knowledge of the local motions. Most importantly, due to its inherent locally adaptive nature, the 3-D deblurring is capable of automatically deblurring the portions of the sequence, which are motion blurred, without segmentation and without adversely affecting the rest of the spatiotemporal domain, where such blur is not present. Our method is a two-step approach; first we upscale the input video in space and time without explicit estimates of local motions, and then perform 3-D deblurring to obtain the restored sequence.

Proceedings ArticleDOI
25 Jul 2011
TL;DR: A novel visibility algorithm for rendering motion blur with per-pixel anti-aliasing is presented; it uses a number of line samples over a rectangular group of pixels, and together with the time dimension, a two-dimensional spatio-temporal visibility problem is solved per line sample.
Abstract: We present a novel visibility algorithm for rendering motion blur with per-pixel anti-aliasing. Our algorithm uses a number of line samples over a rectangular group of pixels, and together with the time dimension, a two-dimensional spatio-temporal visibility problem needs to be solved per line sample. In a coarse culling step, our algorithm first uses a bounding volume hierarchy to rapidly remove geometry that does not overlap with the current line sample. For the remaining triangles, we approximate each triangle's depth function, along the line and along the time dimension, with a number of patch triangles. We resolve for the final color using an analytical visibility algorithm with depth sorting, simple occlusion culling, and clipping. Shading is decoupled from visibility, and we use a shading cache for efficient reuse of shaded values. In our results, we show practically noise-free renderings of motion blur with high-quality spatial anti-aliasing and with competitive rendering times. We also demonstrate that our algorithm, with some adjustments, can be used to accurately compute motion blurred ambient occlusion.

Proceedings ArticleDOI
05 Aug 2011
TL;DR: A hierarchical traversal algorithm for stochastic rasterization of motion blur, which efficiently reduces the number of inside tests needed to resolve spatio-temporal visibility and is based on novel tile against moving primitive tests that also provide temporal bounds for the overlap.
Abstract: We present a hierarchical traversal algorithm for stochastic rasterization of motion blur, which efficiently reduces the number of inside tests needed to resolve spatio-temporal visibility. Our method is based on novel tile against moving primitive tests that also provide temporal bounds for the overlap. The algorithm works entirely in homogeneous coordinates, supports MSAA, facilitates efficient hierarchical spatio-temporal occlusion culling, and handles typical game workloads with widely varying triangle sizes. Furthermore, we use high-quality sampling patterns based on digital nets, and present a novel reordering that allows efficient procedural generation with good anti-aliasing properties. Finally, we evaluate a set of hierarchical motion blur rasterization algorithms in terms of both depth buffer bandwidth, shading efficiency, and arithmetic complexity.
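
The core tile-versus-moving-primitive test with temporal bounds reduces to clipping [0, 1] by linear inequalities, since a linearly moving bounding box overlaps a tile on a t-interval per axis. A hedged sketch in Python rather than shader/SIMD code, with our own conventions:

```python
def tile_time_bounds(tile, box0, box1):
    """Temporal overlap bounds of a moving AABB (box0 at t=0 -> box1 at t=1)
    against a screen tile; rectangles are (min_x, min_y, max_x, max_y).
    Returns (t_in, t_out), or None if the tile can be culled entirely."""
    t_in, t_out = 0.0, 1.0
    for lo0, lo1, hi0, hi1, tlo, thi in (
        (box0[0], box1[0], box0[2], box1[2], tile[0], tile[2]),   # x axis
        (box0[1], box1[1], box0[3], box1[3], tile[1], tile[3]),   # y axis
    ):
        # overlap on this axis needs box_lo(t) <= tile_hi and box_hi(t) >= tile_lo
        for a0, a1, bound, sign in ((lo0, lo1, thi, 1.0), (hi0, hi1, tlo, -1.0)):
            slope = sign * (a1 - a0)          # condition: slope*t + offs <= 0
            offs = sign * (a0 - bound)
            if abs(slope) < 1e-12:
                if offs > 0:
                    return None               # never overlapping on this axis
            elif slope > 0:
                t_out = min(t_out, -offs / slope)
            else:
                t_in = max(t_in, -offs / slope)
    return (t_in, t_out) if t_in <= t_out else None
```

Stochastic time samples outside the returned interval can be skipped without any inside test, which is where the reduction in test count comes from.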

Patent
21 Dec 2011
TL;DR: In this article, a method for compensating for motion blur when performing 3D scanning of at least a part of an object by means of a 3D scanner is proposed, where the motion blur occurs because the scanner and the object are moved relative to each other while the scanning is performed.
Abstract: Disclosed is a method for compensating for motion blur when performing a 3D scanning of at least a part of an object by means of a 3D scanner, where the motion blur occurs because the scanner and the object are moved relative to each other while the scanning is performed, and where the motion blur compensation comprises:
- determining whether there is a relative motion between the scanner and the object during the acquisition of the sequence of focus plane images;
- if a relative motion is determined, performing a motion compensation based on the determined motion; and
- generating a 3D surface from the sequence of focus plane images.

Proceedings ArticleDOI
12 Dec 2011
TL;DR: A computer is trained to select still frames from video that work well as candid portraits and is better able to select quality still frames than previous techniques, such as simply omitting frames that contain blinking or motion blur, or selecting only smiles.
Abstract: In this paper, we train a computer to select still frames from video that work well as candid portraits. Because of the subjective nature of this task, we conduct a human subjects study to collect ratings of video frames across multiple videos. Then, we compute a number of features and train a model to predict the average rating of a video frame. We evaluate our model with cross-validation, and show that it is better able to select quality still frames than previous techniques, such as simply omitting frames that contain blinking or motion blur, or selecting only smiles. We also evaluate our technique qualitatively on videos that were not part of our validation set, and were taken outdoors and under different lighting conditions.
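
The learning setup is standard regression from frame features to the averaged human rating; a toy sketch with generic features (the paper's feature set is richer and face-specific):

```python
import numpy as np

def frame_features(frame):
    """A few generic per-frame cues: sharpness (penalises motion blur),
    brightness, and contrast. Real candid-portrait features would add
    face-specific cues such as eyes-open, smile, and pose."""
    g = frame.mean(axis=2) if frame.ndim == 3 else frame
    gy, gx = np.gradient(g.astype(float))
    return np.array([np.hypot(gx, gy).mean(), g.mean(), g.std()])

def fit_rating_model(frames, ratings, lam=1e-2):
    """Ridge regression from features to mean subject rating."""
    X = np.stack([frame_features(f) for f in frames])
    X = np.hstack([X, np.ones((len(X), 1))])          # bias column
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ np.asarray(ratings, float))

def best_frame(frames, w):
    """Return the index of the highest-predicted-rating frame."""
    scores = [frame_features(f) @ w[:-1] + w[-1] for f in frames]
    return int(np.argmax(scores))
```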