
Showing papers on "Motion blur published in 2000"


Patent
16 May 2000
TL;DR: In this paper, a method and apparatus for creating motion blur, depth of field, and screen door effects when rendering three-dimensional graphics data are disclosed, including a graphics system configured with a graphics processor, a super-sampled sample buffer, and a sample-to-pixel calculation unit.
Abstract: A method and apparatus for creating motion blur, depth of field, and screen door effects when rendering three-dimensional graphics data are disclosed. A graphics system configured with a graphics processor, a super-sampled sample buffer, and a sample-to-pixel calculation unit is disclosed. The graphics processor may be configured to use a sample mask to select different subsets of sample coordinates to be rendered for a particular frame. Each subset may be rendered applying a different set of attributes, and the resulting samples may then be stored together in the sample buffer. The sample-to-pixel calculation unit may be configured to filter the samples into output pixels that are provided to a display device. The attributes that may be changed from subset to subset include the viewpoint, the time at which objects in the data are rendered, which objects or geometric primitives in the data are rendered, the position of objects in the data, the color of objects in the data, the transparency of objects in the data, and the shape of objects in the data.
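To make the sample-buffer idea above concrete, here is a minimal software sketch in Python/NumPy (the patent describes graphics hardware; the scene, resolution, and sample counts below are illustrative assumptions): different subsets of jittered samples are rendered at different shutter times, stored together in a sample buffer, and then filtered down to output pixels, which produces motion blur.

import numpy as np

W, H = 64, 64            # output pixel resolution (assumed)
SUBSETS = 8              # sample subsets, each rendered at a different shutter time
SAMPLES_PER_PIXEL = 4    # samples contributed by each subset per pixel

def render_sample(x, y, t):
    # Shade one sample: a bright disc moving left to right during the frame.
    cx, cy, r = 10 + 44 * t, H / 2, 6.0
    return 1.0 if (x - cx) ** 2 + (y - cy) ** 2 < r * r else 0.0

rng = np.random.default_rng(0)
sample_buffer = np.zeros((H, W, SUBSETS * SAMPLES_PER_PIXEL))
ys, xs = np.mgrid[0:H, 0:W]

for s in range(SUBSETS):
    t = (s + 0.5) / SUBSETS                      # time attribute for this subset
    for j in range(SAMPLES_PER_PIXEL):
        # jittered sample coordinates selected for this subset (the "sample mask")
        jx, jy = rng.random((H, W)), rng.random((H, W))
        shade = np.vectorize(render_sample)(xs + jx, ys + jy, t)
        sample_buffer[..., s * SAMPLES_PER_PIXEL + j] = shade

# sample-to-pixel calculation: a simple box filter over all stored samples
pixels = sample_buffer.mean(axis=-1)             # motion-blurred frame in [0, 1]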

103 citations


Patent
16 May 2000
TL;DR: Butler et al. present a graphics system and method for performing blur effects, including motion blur and depth of field effects; the system comprises a graphics processor, a sample buffer, and a sample-to-pixel calculation unit.
Abstract: A graphics system and method for performing blur effects, including motion blur and depth of field effects, are disclosed. In one embodiment the system comprises a graphics processor, a sample buffer, and a sample-to-pixel calculation unit. The graphics processor is configured to receive a set of three-dimensional (3D) graphics data and render a plurality of samples based on the set of 3D graphics data. The processor is also configured to generate sample tags for the samples, wherein the sample tags are indicative of whether or not the samples are to be blurred. The super-sampled sample buffer is coupled to receive and store the samples from the graphics processor. The sample-to-pixel calculation unit is coupled to receive and filter the samples from the super-sampled sample buffer to generate output pixels, which in turn are displayable to form an image on a display device. The sample-to-pixel calculation units are configured to select the filter attributes used to filter the samples into output pixels based on the sample tags.

90 citations


Proceedings ArticleDOI
04 Dec 2000
TL;DR: Images degraded by motion blur can be restored when several blurred images with different blur directions are available; restoration examples are given on simulated data as well as on images with real motion blur.
Abstract: Images degraded by motion blur can be restored when several blurred images are given, and the direction of motion blur in each image is different. Given two motion blurred images, best restoration is obtained when the directions of motion blur in the two images are orthogonal. Motion blur at different directions is common, for example, in the case of small hand-held digital cameras due to fast hand trembling and the light weight of the camera. Restoration examples are given on simulated data as well as on images with real motion blur.
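As an illustration of the idea, the following Python/NumPy sketch restores a scene from two images blurred along different directions with a regularized least-squares combination in the frequency domain; it is a generic formulation under assumed, known 1-D box PSFs, not the authors' exact algorithm.

import numpy as np

def motion_psf(shape, length, horizontal=True):
    # 1-D box PSF of the given length, horizontal or vertical, anchored at (0, 0).
    psf = np.zeros(shape)
    if horizontal:
        psf[0, :length] = 1.0 / length
    else:
        psf[:length, 0] = 1.0 / length
    return psf

def restore_from_two_blurs(g1, g2, psf1, psf2, eps=1e-3):
    # Each 1-D blur destroys information only along its own direction, so the two
    # spectra cover each other's zeros and the joint inverse is well conditioned.
    H1, H2 = np.fft.fft2(psf1), np.fft.fft2(psf2)
    G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
    F = (np.conj(H1) * G1 + np.conj(H2) * G2) / (np.abs(H1) ** 2 + np.abs(H2) ** 2 + eps)
    return np.real(np.fft.ifft2(F))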

81 citations


Patent
07 Dec 2000
TL;DR: In this article, the authors propose a driving process for a liquid crystal display in which a plurality of scanning lines 2 and a plurality of signal lines 3 are disposed in a grid-like arrangement, and an image corresponding to image data is displayed by selecting one of the scanning lines 2 at a time and altering the state of the liquid crystal via the signal lines 3.
Abstract: A liquid crystal display driving process is provided which prevents the appearance of motion blur without any increase in circuit size or any reduction in panel numerical aperture. In this driving process, a plurality of scanning lines 2 and a plurality of signal lines 3 are disposed in a grid-like arrangement, and an image corresponding to image data is displayed by selecting one of the scanning lines 2 at a time and altering the state of the liquid crystal via the signal lines 3. An image data selection period t1 and a black display selection period t2 are set within a time frame shorter than the time necessary for scanning any one of the scanning lines 2; an image corresponding to the image data is displayed via the signal lines 3 during the image data selection period t1, and a monochromatic (black) image is displayed via the signal lines 3 during the black display selection period t2.

74 citations


Patent
14 Dec 2000
TL;DR: In this paper, a method and apparatus for estimating the blur parameters of blurred images (g1, g2) are disclosed, which has one or more image sensors for capturing the blurred images and a plurality of correlators for performing autocorrelation of the blurred image and cross-correlation between the two images.
Abstract: A method and apparatus (9) for estimating the blur parameters of blurred images (g1, g2) are disclosed. The apparatus (9) has one or more image sensors (10) for capturing the blurred images (g1, g2), a plurality of correlators (20, 30) for performing autocorrelation of the blurred image (g1) and cross-correlation between the two images (g1, g2) respectively, and an error function calculator (40) for evaluating an error function over all possible displacements using the results from the correlators (20, 30). The apparatus (9) further includes an extreme locater (50) for finding the displacement with the minimum value for the error function.
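A minimal software sketch of the correlation-based idea, under assumptions: the specific error function below is the standard sum-of-squared-differences criterion expanded into correlation terms, chosen for illustration rather than taken from the patent.

import numpy as np

def xcorr2(a, b):
    # Circular 2-D cross-correlation via the FFT: result[d] = sum_x a[x + d] * b[x].
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def estimate_displacement(g1, g2):
    # Error at every candidate displacement, expanded into correlation terms:
    # E(d) = sum(g1^2) + sum(g2^2) - 2 * R12(d), where R12 is the cross-correlation.
    r11 = np.sum(g1 * g1)
    r22 = np.sum(g2 * g2)
    r12 = xcorr2(g2, g1)
    err = r11 + r22 - 2.0 * r12
    dy, dx = np.unravel_index(np.argmin(err), err.shape)   # "extreme locater" step
    Hh, Ww = err.shape                                     # map circular indices to signed shifts
    return ((dy + Hh // 2) % Hh - Hh // 2, (dx + Ww // 2) % Ww - Ww // 2)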

47 citations


Patent
31 Jul 2000
TL;DR: In this paper, the authors present techniques for simulating and generating lifelike digital representations of scenes that may include one or more dynamic linear objects such as rope, antennae, hair, feathers, fur and grasses.
Abstract: The present invention presents techniques for simulating and generating lifelike digital representations of scenes that may include one or more dynamic linear objects such as rope, antennae, hair, feathers, fur and grasses. Individualized geometric models may be defined for a selected, manageable subset of the linear objects. By interpolating and animating based upon these defined geometric models, subject to user-specified object parameters, a dynamic simulation and a static geometry may subsequently be generated. Rendering techniques according to the present invention may be used to generate two-dimensional image projections of these geometries, as seen from a specified point of view. These steps of geometric interpolation and rendering are performed in an iterative manner, such that numerous fine-grained objects may be processed and rendered portion by portion, thereby greatly reducing the computational complexity of the task. Other aspects of the invention include the use of depth information regarding individual hairs for purposes of performing accurate rendering. Selected portions of depth and velocity information are also retained and utilized in order to composite and motion blur, in a reasonably accurate manner, the rendered hair image projections together with other three-dimensional scene elements.

41 citations


Proceedings ArticleDOI
03 May 2000
TL;DR: It is shown how a high-degree resampling filter, such as a Gaussian or cubic spline, leads to superior results in all cases, without the need for constant scrutiny and hand tweaking on the part of the animator.
Abstract: We propose the use of high-degree resampling filters for improved temporal antialiasing, or, as the result is often called, motion blur. Without temporal antialiasing, strange effects can occur within an animation; for example, wheels can appear to spin backwards at certain speeds. In a typical effort to overcome this, the camera shutter is left open over some period of time during the frame, leading to temporal box filtering. Even with a temporal box filter, aliasing can still occur. We show how a high-degree resampling filter, such as a Gaussian or cubic spline, leads to superior results in all cases, without the need for constant scrutiny and hand tweaking on the part of the animator.
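A minimal sketch of temporal filtering with a higher-degree kernel, assuming a renderer callable render(t) that shades the scene at an arbitrary time; the tap count and Gaussian width are illustrative choices, not values from the paper.

import numpy as np

def temporal_filter_frame(render, frame_time, n_taps=16, kernel="gaussian", support=1.0):
    # render(t) -> 2-D image; integrate sub-frame renders over +/- support/2 around frame_time.
    ts = np.linspace(-support / 2, support / 2, n_taps)
    if kernel == "box":                       # "shutter left open": temporal box filter
        w = np.ones(n_taps)
    else:                                     # Gaussian kernel, std chosen to decay near the window edge
        w = np.exp(-0.5 * (ts / (support / 4)) ** 2)
    w /= w.sum()
    return sum(wi * render(frame_time + ti) for wi, ti in zip(w, ts))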

21 citations


Journal ArticleDOI
01 May 2000
TL;DR: This work has developed a new algorithm for motion compensation that is very regular, inherently avoids rounding errors, and outperforms the earlier methods.
Abstract: Plasma Display Panels (PDPs) suffer from motion artifacts caused by the subfield driving method. Motion compensation helps to prevent motion blur and dynamic contouring artifacts. Our new algorithm for motion compensation is very regular, inherently avoids rounding errors, and outperforms the earlier methods.

16 citations


Proceedings ArticleDOI
05 May 2000
TL;DR: Ongoing work in Activity Monitoring (AM) for the Airborne Video Surveillance (AVS) project is described; it uses frame-to-frame affine-warping stabilization and temporally integrated intensity differences to detect independent motion.
Abstract: Ongoing work in Activity Monitoring (AM) for the Airborne Video Surveillance (AVS) project is described. The goal of AM is to recognize activities of interest involving humans and vehicles using airborne video. AM consists of three major components: (1) moving object detection, tracking, and classification; (2) image to site-model registration; (3) activity recognition. Detecting and tracking humans and vehicles from airborne video is a challenging problem due to image noise, low GSD, poor contrast, motion parallax, motion blur, camera blur, and camera jitter. We use frame-to-frame affine-warping stabilization and temporally integrated intensity differences to detect independent motion. Moving objects are initially tracked using nearest-neighbor correspondence, followed by a greedy method that favors long track lengths and assumes locally constant velocity. Object classification is based on object size, velocity, and periodicity of motion. Site-model registration uses GPS information and camera/airplane orientations to provide an initial geolocation with +/- 100 m accuracy at an elevation of 1000 m. A semi-automatic procedure is utilized to improve the accuracy to +/- 5 m. The activity recognition component uses the geolocated tracked objects and the site-model to detect pre-specified activities, such as people entering a forbidden area and a group of vehicles leaving a staging area.
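A minimal sketch of the stabilization-plus-integrated-differences idea using OpenCV (an assumed toolchain; the parameter values and the sparse-feature affine estimation are illustrative choices, not the AVS system's implementation).

import cv2
import numpy as np

def independent_motion_mask(frames, n_integrate=5, thresh=20.0):
    # frames: list of grayscale uint8 images; returns a binary mask of independent motion.
    ref = frames[0].astype(np.float32)
    pts0 = cv2.goodFeaturesToTrack(frames[0], maxCorners=400, qualityLevel=0.01, minDistance=7)
    acc = np.zeros_like(ref)
    for f in frames[1:n_integrate + 1]:
        # track sparse corners from the reference frame into the current frame
        pts1, st, _ = cv2.calcOpticalFlowPyrLK(frames[0], f, pts0, None)
        good0, good1 = pts0[st == 1], pts1[st == 1]
        # affine warp that cancels the camera/airframe motion (stabilization)
        A, _ = cv2.estimateAffinePartial2D(good1, good0)
        stabilized = cv2.warpAffine(f, A, (f.shape[1], f.shape[0]))
        # temporally integrate the residual intensity differences
        acc += np.abs(stabilized.astype(np.float32) - ref)
    return (acc / n_integrate) > thresh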

15 citations


Proceedings ArticleDOI
03 Sep 2000
TL;DR: A new method is introduced to extract motion information from motion streaks: edges in the blurred images are grouped to determine the focus of expansion, the center of rotation, or motion parallel to the image plane, and the direction of motion is also determined.
Abstract: Motion blur arises when motion is fast relative to the shutter time of a camera. Unlike most work on motion blur, which considers the streaks due to motion blur to be noisy artifacts, in this paper we introduce a new method to extract motion information from these streaks. Previous methods with similar goals first extract an optic flow field from local information in the motion streaks and then infer global motion parameters. In contrast, we adopt a more direct, feature-based approach and extract global motion parameters from the motion streaks themselves. We first extract edges in the motion-blurred images, which we then group to determine the focus of expansion, the center of rotation, or motion parallel to the image plane. Furthermore, we determine the direction of motion. We present results on real images from a mobile robot in cluttered environments.
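Assuming streak edges have already been extracted and fitted as lines (each given by a point and a direction), the focus of expansion can be recovered as the point closest to all streak lines in the least-squares sense; the sketch below is an illustrative formulation, not the authors' grouping procedure.

import numpy as np

def focus_of_expansion(points, directions):
    # points, directions: (N, 2) arrays; streak i is the line p_i + s * d_i.
    # Minimize the summed squared distance to all streak lines.
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal direction
        A += P
        b += P @ p
    return np.linalg.solve(A, b)         # least-squares intersection of the streaks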

Patent
Kenichi Mori
28 Jun 2000
TL;DR: In this patent, the three apexes of a triangle within an image obtained by projecting a three-dimensional model onto two dimensions are input at two times, together with the attribute information of each apex at those times; the resulting triangular prism structure of six apexes in the (pixel, time) space is divided into three four-apex structures, and image drawing information for motion-blurred images is obtained through linear processing in that three-dimensional space.
Abstract: PROBLEM TO BE SOLVED: To provide an image drawing method that makes it possible to draw higher-quality motion-blur images with a smaller amount of computation. SOLUTION: A three-dimensional space is formed from the coordinate axes of the pixels of a two-dimensional image and a time axis. The three-dimensional coordinate values of the three apexes of a triangle to be processed, within an image obtained through the projection of a three-dimensional model onto two dimensions, are input at two times, followed by the attribute information of each apex at the two times. The resulting triangular prism structure constructed of six apexes is divided into three structures of four apexes each, and image drawing information for motion-blur images is obtained through linear processing in the three-dimensional space, based on the two-dimensional coordinate values and the attribute information of the four apexes constituting each structure and on the two times.

01 Jan 2000
TL;DR: In this paper, a method is presented to extract and track the position of a guide wire during endovascular interventions under X-ray fluoroscopy using a template matching procedure.
Abstract: A method is presented to extract and track the position of a guide wire during endovascular interventions under X-ray fluoroscopy. The method can be used to improve guide wire visualization in the low-quality fluoroscopy images. A two-step procedure is utilized to track the guide wire in subsequent frames. First, a rough estimate of the displacement is obtained using a template matching procedure. Subsequently, the position of the guide wire is determined by fitting the guide wire to a feature image in which line-like structures are enhanced. In this optimization step, the influence of the scale at which the feature is calculated and the additional value of using directional information are investigated. The method is applied both on the original and subtraction images. Using the proper parameter settings, the guide wire could successfully be tracked based on the original images in 141 out of 146 frames from 5 image sequences. Endovascular interventions are rapidly advancing as an alternative for conventional invasive open surgical procedures. During interventions, a guide wire is advanced under fluoroscopic control. Accurate positioning of the guide wire and the catheter with regard to the vasculature is a prerequisite for a successful procedure. Owing to the low dose used in fluoroscopy in order to minimize the radiation exposure of the patient and radiologist, image quality is often limited. Additionally, motion artifacts owing to patient motion and guide wire motion further limit image quality. Therefore, a method to extract and track guide wires is presented which can deal with the low signal-to-noise ratio inherent to fluoroscopic images and with the disappearance of the guide wire in a few frames owing to motion blur. The method can be used to improve guide wire visualization, potentially enabling a reduction in radiation exposure. It can also be used to detect the position of the guide wire in world coordinates for registration with preoperatively acquired images as a navigation tool for radiologists.
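A minimal sketch of the first, rough step only, using OpenCV template matching as an assumed implementation of the displacement estimate (the subsequent fit to the line-enhanced feature image is not shown).

import cv2

def rough_displacement(prev_frame, next_frame, wire_bbox):
    # wire_bbox = (x, y, w, h): region around the guide wire in the previous frame.
    x, y, w, h = wire_bbox
    template = prev_frame[y:y + h, x:x + w]
    scores = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)    # best = (x, y) of the highest-scoring match
    return best[0] - x, best[1] - y          # rough displacement in pixels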

Book ChapterDOI
12 Mar 2000
TL;DR: In this article, a linear imaging model is introduced, based on which an identity equation is derived between the original images and the desired image in which objects in the scene are selectively visually manipulated, and a linear filter is derived based on this principle.
Abstract: A new image generation scheme is introduced. The scheme linearly fuses multiple images, which are differently focused, into a new image in which objects in the scene are subjected to arbitrary linear processing such as focus (blurring), enhancement, extraction, shifting, etc. The novelty of the work is that it does not require any segmentation to produce visual effects on objects in the scene. It typically uses two images of the scene: in one of them the foreground is in focus and the background is out of focus, and in the other image vice versa. A linear imaging model is introduced, based on which an identity equation is derived between the original images and the desired image in which the object in the scene is selectively visually manipulated, and the desired image is produced directly from the original images. A linear filter is derived based on this principle. The two original images, after linear filtering, are added to produce the desired image. Various visual effects are examined, such as focus manipulation, motion blur, enhancement, extraction, and shifting. A special camera is also introduced, by which three synchronized, differently focused videos can be captured, so that dynamic scenes can also be handled by the scheme. A real-time implementation using the special camera for processing moving scenes is described as well.
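As a simplified illustration of linear fusion without segmentation, the following sketch assumes a symmetric two-image model (foreground-focused and background-focused, with a single Gaussian defocus kernel) and solves for an all-in-focus image per spatial frequency; the paper derives its filters from its own linear imaging model, so this is only an analogue.

import numpy as np

def gaussian_otf(shape, sigma):
    # Transfer function of an assumed Gaussian defocus kernel with std sigma (pixels).
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))

def fuse_all_in_focus(i1, i2, sigma=2.0):
    # Assumed model: I1 = F + H*B (foreground sharp) and I2 = H*F + B (background sharp),
    # so the all-in-focus spectrum is F + B = (I1 + I2) / (1 + H), with no singularity.
    H = gaussian_otf(i1.shape, sigma)
    I1, I2 = np.fft.fft2(i1), np.fft.fft2(i2)
    return np.real(np.fft.ifft2((I1 + I2) / (1.0 + H)))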


Journal ArticleDOI
Cliff Reiter
TL;DR: This column closely follows the discussion of motion blur in [4] and remarks that the Fourier transform of an argument array results in another array, typically a complex valued array.
Abstract: FAST FOURIER TRANSFORMS make it possible to convert back and forth between image space and frequency space. It is amazing that it is possible to remove some of the blur caused by motion or improper focus [5]. J offers a powerful add-on package, fftw, based on work of Frigo and Johnson [2], that makes it easy to combine fast Fourier transforms with other array processing. This column closely follows the discussion of motion blur in [4]. We will not discuss the mathematics of computing the Fourier transform. However, we remark that the Fourier transform of an argument array results in another array. While the input array is ordinarily a real array representing an image, any complex valued array is allowed. The result, typically a complex valued array, can be thought of as giving a real and imaginary part for each entry. Or, more important for applications, we can think of those complex entries as a magnitude and phase. Images of the magnitude essentially give diffraction patterns [1,3]. The fftw add-on package can be downloaded from www.jsoftware.com. We load the fftw add-on package, create a small example and compute the Fourier transform of that array as follows: require 'system\\packages\\fftw\\fftw' ]a=:
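The column works in J with the fftw add-on; as a rough analogue in Python/NumPy, a Wiener-style inverse filter for a horizontal motion blur shows the same frequency-domain idea (the blur length and regularization constant are assumed inputs).

import numpy as np

def deblur_horizontal_motion(image, blur_len, k=1e-2):
    # Wiener-style inverse filter for a horizontal box blur of blur_len pixels.
    psf = np.zeros_like(image, dtype=float)
    psf[0, :blur_len] = 1.0 / blur_len
    H = np.fft.fft2(psf)
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)    # k suppresses noise where H is near zero
    return np.real(np.fft.ifft2(F))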

Patent
15 May 2000
TL;DR: In this paper, a method for estimating motion blur information from a degraded image is proposed, which is based on the Fourier transform of the PSF (point spread function) and the least-squares method.
Abstract: PURPOSE: A method for estimating motion blur information from a degraded image obtains the motion blur information, such as the direction and the length of the motion blur, using the pole feature of the sinc function that appears in the Fourier transform of the PSF (point spread function). CONSTITUTION: The method for estimating motion blur information comprises the steps of: Fourier-transforming a degraded image to which noise has been added; detecting each pole at each reference line of the Fourier-transformed image and performing a horizontal and vertical extreme-pole trajectory transformation; setting an SDR (signal dominant region) and an NDR (noise dominant region) from the transformed horizontal and vertical extreme-pole trajectories; and applying a weight that is inversely proportional to the curvature of the circumference of the extreme pole in the SDR and estimating the direction of the motion blur using the least-squares method. The Fourier-transforming process comprises the steps of modeling the degraded image according to the direction and the length of the motion blur and Fourier-transforming the modeled image.
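A common textbook alternative to reading off the spectral nulls directly is the cepstral method sketched below; it is offered only as an illustration of how the blur length reveals itself in the spectrum (horizontal blur assumed), not as the patented SDR/NDR procedure.

import numpy as np

def blur_length_cepstrum(image, max_len=64):
    # The box PSF of a 1-D motion blur adds a strong negative spike to the cepstrum
    # at a lag equal to the blur length (horizontal blur assumed here).
    spec = np.abs(np.fft.fft(image, axis=1)) + 1e-8     # row-wise magnitude spectra
    ceps = np.real(np.fft.ifft(np.log(spec), axis=1))   # row-wise cepstra
    profile = ceps.mean(axis=0)                         # average over rows for robustness
    return int(np.argmin(profile[2:max_len]) + 2)       # lag of the strongest negative spike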

Proceedings ArticleDOI
06 Jun 2000
TL;DR: The degradation due to motion blur is quantified by assessing the blur's effect on the Detective Quantum Efficiency (DQE), which captures the signal- and noise transfer properties of an imaging system.
Abstract: In continuous X-ray fluoroscopy, images are sometimes blurred uniformly due to motion of the operating table. Additionally, low-dose fluoroscopy images are degraded by relatively strong quantum noise, which is not affected by the blur. We quantify the degradation due to motion blur by assessing the blur's effect on the Detective Quantum Efficiency (DQE), which captures the signal- and noise-transfer properties of an imaging system. The estimation of the motion blur parameters, viz. direction and extent, is carried out one after the other. The central idea for direction detection is to apply an inertia-like matrix to the global spectrum of the degraded image, which assesses the anisotropy caused by the blur. Once the blur direction is obtained by this tensor approach, its extent is identified from an estimated power spectrum or bispectrum slice along this direction. The decision for either method is based on the eigenvalues of the inertia matrix. The blur parameters are used as input for a nonlinear maximum-a-posteriori restoration technique based on a generalized Gauss-Markov random field, for which several efficient optimization strategies are presented. This approach includes a thresholdless edge model. The DQE is generalized as a quality measure to assess the signal- and noise-transfer properties of the restoration method.
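A minimal sketch of the inertia-matrix idea for direction detection, under simplifying assumptions (log power spectrum, no windowing or whitening): motion blur compresses the spectrum along the blur direction, so the eigenvector with the smallest second moment estimates that direction.

import numpy as np

def blur_direction_degrees(image):
    # Second-moment ("inertia-like") matrix of the centered log power spectrum.
    S = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    h, w = S.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x - w / 2.0
    y = y - h / 2.0
    m = S / S.sum()
    J = np.array([[np.sum(m * x * x), np.sum(m * x * y)],
                  [np.sum(m * x * y), np.sum(m * y * y)]])
    vals, vecs = np.linalg.eigh(J)            # eigenvalues in ascending order
    v = vecs[:, 0]                            # axis of smallest spread = blur direction
    return np.degrees(np.arctan2(v[1], v[0]))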

Proceedings ArticleDOI
15 Dec 2000
TL;DR: In this article, the authors investigated the relative effects of two types of degradations on the ability of observers to recognize targets in a vibrating video sequence and determined the required precision of the deblurring and registration processes.
Abstract: There are two kinds of video image sequence distortions caused by vibration of the camera. The first is the vibration of the line-of-sight causing location changes of the scene in successive frames. The second effect is the blur of each frame of the sequence due to frame motion during its exposure. In this work, the relative effects of these two types of degradations on the ability of observers to recognize targets are investigated. This study is useful for evaluating the amount of effort required to compensate each effect. We found that the threshold contrast needed to recognize a target in a vibrating video sequence under certain conditions is more affected by the motion blur of each frame than the oscillation of the line-of-sight. For digital sequence restoration methods, this study determines the required precision of the deblurring and registration processes. It shows that the deblurring process should not be neglected as it often is.

Book
01 Jan 2000
TL;DR: This tutorial covers After Effects: The Big Picture and Adobe's Dynamic Media Suite, explaining how to import and manage footage in a project and how to control layer properties with keyframes using the Effect Controls window.
Abstract: 1. After Effects: The Big Picture. The QuickPro Series. Adobe's Dynamic Media Suite. Minimum Requirements. Suggested System Features. Professional System Additions. New Features. The Standard Version versus the Production Bundle. Mac versus Windows. Overview of the Work Flow. Overview of the Interface. Grouping Related Windows. Using Tabbed Windows and Palettes. 2. Importing Footage into a Project. Creating a Project. Importing Files. Importing Unrecognized Files. Still Image Durations. Importing Files with Alpha Channels. Importing PhotoShop and Illustrator Files. Importing Premiere Projects. Importing an After Effects Project. Importing Audio. Importing Motion Footage. Setting the Frame Rate. Looping. Film: Pulldown. Pixel Aspect Ratios. Setting the EPS Options. Interpretation of Footage. 3. Managing Footage. Displaying Information in the Project Window. Sorting Footage in the Project Window. Using Labels. Organizing Footage in Folders. Renaming and Removing Items. Proxies and Placeholders. Proxies in the Project Window. Viewing Footage. Opening Footage in the Original Application. The Footage Window. Cueing Motion Footage. Magnification and Safe Zones. Video-Safe Zones and Grid. Rulers and Guides. Snapshots. Channels. 4. Compositions. Creating Compositions. Choosing Composition Settings. The Composition and Time Layout Windows. Setting the Time. Adding Footage to a Composition. Solids and Adjustment Layers. Nesting Compositions. 5. Layer Basics. Selecting Layers. Stacking Order. Naming Layers. Layer Numbers and Labels. Switching Video and Audio On and Off. Locking a Layer. Basic Layer Switches. Shy Layers. Continuously Rasterizing a Layer. Quality Setting Switches. 6. Layer Editing. Viewing Layers in the Time Graph and Layer Windows. The Time Graph. Navigating the Time Graph. The Layer Window. Trimming Layers. Moving Layers in Time. Sequencing and Overlapping Layers. Other Editing Functions. Using Markers. 7. Properties and Keyframes. Layer Property Types. Viewing Properties. Setting Global versus Animated Properties. Viewing Spatial Controls in the Composition Window. Transform Properties. Using Slider Controls for Setting Properties. Alternative Controls for Setting Properties. Nudging Layer Properties. Audio Properties. Viewing an Audio Waveform. Using the Audio Palette. Controlling Layer Properties with Keyframes. Moving Keyframes. Copying Values and Keyframes. 8. Playback, Previews, and RAM. Frame Rates of Playback versus Preview. Using the Time Controls. Scrubbing Video. Suppressing Window Updates. Scrubbing Audio. Types of Previews. Setting the Work Area. Managing RAM. 9. Mask Essentials. Creating Masks. Viewing Masks. Targeting Masks. Drawing Mask Shapes. Control Points and Segments. Building a Path. How Mighty Is Your Pen? Selecting Masks and Points. Opening and Closing Paths. Scaling and Rotating Masks. Changing the Shape of a Mask. Path Editing Tools. Using Masks from PhotoShop and Illustrator. Inverting a Mask. Locking and Hiding Masks. Moving Masks Relative to the Layer Image. Feathering Mask Edges. Mask Modes. 10. Effects Fundamentals. Standard Effect Categories. Applying Effects. Viewing Effect Property Controls. Using the Effect Controls Window. Removing and Resetting Effects. Effect Information and Options. Disabling Effects Temporarily. Adjusting Effects in the Effect Controls Window. Setting Color in the Effect Controls Window. Setting Values in the Effect Controls Window. Setting the Angle in the Effect Controls Window. Setting an Effect Point. 
Using Favorite Effects. Copying and Pasting Effects. Applying Multiple Effects. Applying Effects to an Adjustment Layer. Compound Effects. Using Compound Effects. Animating Effects. 11. Standard Effects in Action. Adjust Effects. Using the Levels Effect. Audio Effects. Using the Stereo Mixer. Blur and Sharpen Effects. Using the Compound Blur Effect. Channel Effects. Using the Blend Effect. Distort Effects. Using the PS + Spherize Effect. Image Control Effects. Perspective Effects. Using Bevel Alpha. Render Effects. Using the Audio Waveform Effect. Stylize Effects. Text Effects. Using Path Text. Time Effects. Using the Echo Effect. Transition Effects. Using the Gradient Wipe. Video Effects. Using Broadcast Colors. 12. More Layer Techniques. Frame Blending. Motion Blur. Layer Modes. Layer Mode Types. Preserving Underlying Transparency. Track Mattes. 13. Keyframe Interpolation. Spatial and Temporal Interpolation. Interpolation Types. Viewing Motion Paths and Spatial Interpolation. Comparing Motion Paths and Mask Paths. Adjusting Spatial Interpolation in the Motion Path. Default Spatial Interpolation. Mastering Spatial Interpolation. Auto-Orient Rotation. Pasting Mask Paths into Motion Paths. Viewing a Value, Speed, or Velocity Graph. Speed, Velocity, and Acceleration. Viewing Speed in the Motion Path. Changing Property Values in a Value Graph. Recognizing Temporal Interpolation. Adjusting Temporal Interpolation in the Value Graph. Adjusting Temporal Interpolation in the Speed and Velocity Graphs. Mastering Temporal Interpolation. Adjusting Temporal Interpolation Numerically. Keyframe Assistants. Roving Keyframes. Changing Interpolation. 14. Complex Projects. Nesting. Rendering Order. Subverting the Render Order. Synchronizing Time. Using the Flowchart View. Pre-composing. Collapsing Transformations. Recursive Switches. Pre-Rendering. 15. Production Bundle Techniques. Keying Effects. Using a Garbage Matte. Using the Color Difference Key. Matte Tools. Using the Simple Choker. Production Bundle Audio Effects. Using Parametric EQ. Production Bundle Visual Effects. Using the Displacement Map Effect. Using the Glow Effect. Using Other Glow Settings. Using the Wiggler Plug-In Palette. Keyframe Assistants. Using Motion Math. Motion Math Scripts Included with the Production Bundle. Understanding the Motion Math Dialog Box. Running a Motion Math Script. 16. Output. The Render Queue Window. Making a Movie. Using the Render Queue Window. Pausing and Stopping Rendering. Assigning Multiple Output Modules. Choosing Render Settings. Choosing Output Module Settings. Creating Templates. Exporting Single Still Images. Setting Overflow Volumes. Movie Files and Compression. QuickTime Video Codecs. Video for Windows Codecs. Index.

Proceedings ArticleDOI
29 Dec 2000
TL;DR: In this article, a spatially adaptive regularization algorithm for restoring out-of-focus and motion-blurred images is proposed, together with a method to estimate the blur parameters and a segmentation method for spatially adaptive processing.
Abstract: Recently, many image processing systems have been required to offer high-quality images. For example, when we use a surveillance system with a digital camcorder and a digital video recorder, it is highly probable that the acquired image suffers from various types of image degradation, such as motion blur and out-of-focus blur. With such degradation, we cannot obtain important information. This is mainly caused by the limited performance of the image formation system. In this work, we investigate the causes of out-of-focus blur and motion blur. With a simultaneous formulation of the corresponding degradations, we propose a spatially adaptive regularization algorithm for restoring out-of-focus and motion-blurred images. Accordingly, we present a method to estimate the blur parameters and a segmentation method for spatially adaptive processing.
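A minimal, non-adaptive sketch of regularized restoration in the spirit described above (constrained least squares with a Laplacian smoothness prior); the paper's spatially adaptive weighting and segmentation are omitted, and the blur PSF is assumed known.

import numpy as np

def cls_restore(blurred, psf, lam=1e-2):
    # Constrained least squares: psf is the (assumed known) blur kernel, same shape as
    # the image and anchored at (0, 0); lam weights the Laplacian smoothness prior.
    H = np.fft.fft2(psf)
    lap = np.zeros_like(blurred, dtype=float)
    lap[0, 0], lap[0, 1], lap[1, 0], lap[0, -1], lap[-1, 0] = -4.0, 1.0, 1.0, 1.0, 1.0
    L = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(F))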

Proceedings ArticleDOI
28 Dec 2000
TL;DR: In this paper, the authors deal with restoration of composite frame images degraded by motion, using a new method for identifying the motion from each field that can be applied to both uniform-velocity and nonlinear motion.
Abstract: A composite frame image is an interlaced composition of two sub-images, the odd and even fields. This image type is common in many imaging systems that produce video sequences. When relative motion between the camera and the scene occurs during the imaging process, two types of distortion degrade the image: the edge 'staircase effect' due to the shifted appearances of the objects in successive fields, and blur due to the scene motion during each field exposure. This paper deals with restoration of composite frame images degraded by motion. In contrast to previous works that dealt only with uniform-velocity motion, here we consider the more general case of nonlinear motion. Since conventional motion identification techniques used in other works cannot be employed in the case of nonlinear motion, a new method for identification of the motion from each field is used. Results of motion identification and image restoration for various motion types are presented.

Proceedings ArticleDOI
03 Sep 2000
TL;DR: A simple algorithm for 3D motion estimation under orthography using 3D-to-2D line correspondences is proposed and the watershed algorithm is employed for successful feature extraction in the presence of defocus or motion blur.
Abstract: Over the past few years, virtual studios applications have significantly attracted the attention of the entertainment industry. Optical tracking systems for virtual sets production have become particularly popular tending to substitute electro-mechanical ones. In this work, an existing optical tracking system is revisited, in order to tackle with inherent degenerate cases; namely, reduction of the perspective projection model to the orthographic one and blurring of the blue screen. In this context, we propose a simple algorithm for 3D motion estimation under orthography using 3D-to-2D line correspondences. In addition, the watershed algorithm is employed for successful feature extraction in the presence of defocus or motion blur.