
Showing papers on "Image scaling" published in 1994


Journal ArticleDOI
Yao Wang1, Ouseb Lee1
TL;DR: The proposed representation retains the salient merit of the original model as a feature tracker based on local and collective information, while facilitating more accurate image interpolation and prediction, and can successfully track facial feature movements in head-and-shoulder type of sequences.
Abstract: This paper introduces a representation scheme for image sequences using nonuniform samples embedded in a deformable mesh structure. It describes a sequence by nodal positions and colors in a starting frame, followed by nodal displacements in the following frames. The nodal points in the mesh are more densely distributed in regions containing interesting features such as edges and corners, and are dynamically updated to follow the same features in successive frames. They are determined automatically by maximizing feature (e.g., gradient) magnitudes at nodal points, while minimizing interpolation errors within individual elements, and matching errors between corresponding elements. In order to avoid the mesh elements becoming overly deformed, a penalty term is also incorporated, which measures the irregularity of the mesh structure. The notions of shape functions and master elements commonly used in the finite element method have been applied to simplify the numerical calculation of the energy functions and their gradients. The proposed representation is motivated by the active contour or snake model proposed by Kass, Witkin, and Terzopoulos (1988). The current representation retains the salient merit of the original model as a feature tracker based on local and collective information, while facilitating more accurate image interpolation and prediction. Our computer simulations have shown that the proposed scheme can successfully track facial feature movements in head-and-shoulder type of sequences, and more generally, interframe changes that can be modeled as elastic deformation. The treatment for the starting frame also constitutes an efficient representation of arbitrary still images.

206 citations


Journal ArticleDOI
TL;DR: A technique for measuring the motion of a rigid, textured plane in the frontoparallel plane is developed and tested on synthetic and real image sequences and offers a simple, novel way of tackling the ‘aperture’ problem.
Abstract: A technique for measuring the motion of a rigid, textured plane in the frontoparallel plane is developed and tested on synthetic and real image sequences. The parameters of motion — translation in two dimensions, and rotation about a previously unspecified axis perpendicular to the plane — are computed by a single-stage, non-iterative process which interpolates the position of the moving image with respect to a set of reference images. The method can be extended to measure additional parameters of motion, such as expansion or shear. Advantages of the technique are that it does not require tracking of features, measurement of local image velocities or computation of high-order spatial or temporal derivatives of the image. The technique is robust to noise, and it offers a simple, novel way of tackling the ‘aperture’ problem. An application to the computation of robot egomotion is also described.

163 citations


Proceedings Article
01 Jan 1994
TL;DR: This work describes an approach based on the abstract task of "manifold learning" and presents results on both synthetic and real image sequences to solve the problem of interpolating between specified images in an image sequence.
Abstract: The problem of interpolating between specified images in an image sequence is a simple, but important task in model-based vision. We describe an approach based on the abstract task of "manifold learning" and present results on both synthetic and real image sequences. This problem arose in the development of a combined lip-reading and speech recognition system.

109 citations


Journal ArticleDOI
TL;DR: A new method for the interpolation that has to be performed when motion estimation and compensation are applied to interlaced sequences with subpel accuracy is introduced, based on the assumption that a uniform motion exists between two successive frames.
Abstract: This paper introduces a new method for the interpolation that has to be performed when motion estimation and compensation are applied to interlaced sequences with subpel accuracy. It is based on the assumption that a uniform motion exists between two successive frames. The exact formulas for the estimation are derived. They show that in order to obtain a correct interpolation of each field of one frame, it is necessary to use the information of both fields of this frame. Because the ideal filters have infinite impulse responses, the filter design is discussed, and the efficiency is measured for typical sequences.

67 citations


Proceedings ArticleDOI
13 Nov 1994
TL;DR: This paper focuses on solving the first two sub-problems simultaneously, using the expectation-maximization (EM) algorithm, and experimental results are presented that demonstrate the effectiveness of this approach.
Abstract: In applications that demand highly detailed images, it is often not feasible, or sometimes not even possible, to acquire images of such high resolution using hardware alone (high-precision optics and charge-coupled devices (CCDs)). Instead, image processing methods may be used to construct a high resolution image from multiple, degraded, low resolution images. It is assumed that the low resolution images have been sub-sampled and displaced by sub-pixel shifts. Therefore, the problem can be divided into three sub-problems: registration (estimating the shifts), restoration, and interpolation. This paper focuses on solving the first two sub-problems simultaneously, using the expectation-maximization (EM) algorithm. Experimental results are presented that demonstrate the effectiveness of this approach.

65 citations


Patent
09 Sep 1994
TL;DR: The RSA described in this paper is a resampling application-specific integrated circuit that supports image interpolation or decimation by any arbitrary factor in order to provide flexibility, and utilizes a neighborhood of up to 9 x 9 pixels to produce image data of high quality.
Abstract: A resampling application specific integrated circuit (RSA) supports image interpolation or decimation by any arbitrary factor in order to provide flexibility, and utilizes a neighborhood of up to 9 x 9 pixels to produce image data of high quality. The RSA contains separate vertical and horizontal filter units for vertical and horizontal resizing operations, vertical and horizontal position accumulator units, a configuration register unit for loading the vertical and horizontal position accumulator units, and a memory management unit to interface the RSA to external memory banks. The vertical and horizontal filter units contain nine multipliers and nine corresponding coefficient memories, with each memory preferably containing storage space for thirty-two coefficients. The coefficients are addressed on a pixel-by-pixel basis in response to the outputs of the vertical and horizontal position accumulator units. The RSA is designed to handle an input data stream that contains multiple color components and simultaneously resizes all of the color components.
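A minimal software analogue (not the patent's hardware) of the position-accumulator and coefficient-memory arrangement described above: a position accumulator advances by the inverse scale factor per output pixel, and its fractional part addresses one of thirty-two stored coefficient phases. The 4-tap cubic kernel used to fill the memories here is an illustrative assumption; the RSA itself uses nine taps per direction.

```python
import numpy as np

NUM_PHASES = 32                         # coefficients stored per tap memory (as in the patent)
TAPS = 4                                # illustrative; the RSA uses nine taps

def cubic_weight(d, a=-0.5):
    """Cubic-convolution weight for a sample at distance |d| < 2 (illustrative kernel)."""
    d = abs(d)
    if d < 1:
        return (a + 2) * d**3 - (a + 3) * d**2 + 1
    if d < 2:
        return a * d**3 - 5 * a * d**2 + 8 * a * d - 4 * a
    return 0.0

# "Coefficient memories": one bank of NUM_PHASES weight sets, one weight per tap.
COEFFS = np.array([[cubic_weight(p / NUM_PHASES - (t - 1)) for t in range(TAPS)]
                   for p in range(NUM_PHASES)])

def resize_line(line, scale):
    """Resample a 1-D line of pixels; the accumulator's fractional part picks the phase."""
    num_out = int(round(len(line) * scale))
    out = np.zeros(num_out)
    position = 0.0                                      # position accumulator
    for n in range(num_out):
        base = int(np.floor(position))
        phase = int((position - base) * NUM_PHASES)     # addresses the coefficient bank
        taps = np.clip(np.arange(base - 1, base - 1 + TAPS), 0, len(line) - 1)
        out[n] = np.dot(COEFFS[phase], line[taps])
        position += 1.0 / scale
    return out

print(resize_line(np.arange(16, dtype=float), 1.5).shape)   # (24,)
```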

41 citations


Patent
23 Jun 1994
TL;DR: In this article, an inverse DCT of an appropriate size is combined with scaling of the resulting reconstructed image, where the data is stored as DCT values of blocks of size P×Q and an output image is to be scaled by a factor of R in one dimension and S in the second dimension.
Abstract: The objects of this invention are accomplished by combining the inverse DCT of an appropriate size with scaling of the resulting reconstructed image. In particular, if the data is stored as DCT values of blocks of size P×Q and an output image is to be scaled by a factor of R in one dimension and S in the second dimension, then the process is performed in two stages. First, a scaling of factor K1/P in the first dimension and a scaling of factor L1/Q in the second dimension are done by inverse transforming with 2-dimensional DCTs of size K1×L1. A factor √(K1/P)×√(L1/Q) is absorbed into a dequantization process prior to the inverse transform process. Then a scaling of factor K2/K3 in the first dimension and a scaling of factor L2/L3 in the second dimension is done in the spatial domain. The integers K1, K2, K3, L1, L2, L3 are chosen so that (K1K2/K3)=R, (L1L2/L3)=S, (K1/P)≧R, (L1/Q)≧S, and the ratios (K2/K3) and (L2/L3) are close to 1. The inequality constraints guarantee that the inverse DCT process does not remove low-frequency components that should be present in an image scaled down by factors R, S. The conditions that the ratios K2/K3 and L2/L3 be close to 1 are imposed so that the scaling procedure is simple (fast) to implement. Typically, but not necessarily, P=Q, R=S, K1=L1, K2=L2, K3=L3.
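A minimal sketch of how the stage factors described above might be chosen for one dimension, assuming stored 8x8 blocks and a rational target scale; the search range and helper name are illustrative, not taken from the patent.

```python
from fractions import Fraction

def choose_stage_sizes(P, R, max_dct_size=16):
    """Pick K1, K2, K3 for one dimension so that (K1/P) * (K2/K3) == R,
    K1/P >= R, and K2/K3 is as close to 1 as possible.
    P is the stored DCT block size, R the desired overall scale factor."""
    R = Fraction(R)
    best = None
    for K1 in range(1, max_dct_size + 1):
        if Fraction(K1, P) < R:            # inequality constraint: K1/P >= R
            continue
        residual = R * P / K1              # required spatial-domain ratio K2/K3
        K2, K3 = residual.numerator, residual.denominator
        closeness = abs(float(residual) - 1.0)
        if best is None or closeness < best[0]:
            best = (closeness, K1, K2, K3)
    _, K1, K2, K3 = best
    return K1, K2, K3

# Example: scale 8x8 blocks down to 3/4 size.
# Stage 1: 6-point inverse DCT (6/8 = 3/4); stage 2: K2/K3 = 1 (no spatial resampling).
print(choose_stage_sizes(P=8, R=Fraction(3, 4)))   # -> (6, 1, 1)
```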

34 citations


Patent
30 Aug 1994
TL;DR: In this article, an improved image scaling filter for a video display where a coefficient value for the closest input lines to a given output line was determined by cubic interpolation using the line distances.
Abstract: An improved image scaling filter for a video display where a coefficient value for the closest input lines to a given output line that are less than two line lengths from the given output line are determined by cubic interpolation using the line distances. The input lines are multiplied by the coefficient for that line and the multiplied closest input line values are summed to determine the output line value.
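A minimal sketch of this kind of line-wise scaling, assuming the standard cubic-convolution kernel (Keys, a = -0.5): each output line is a weighted sum of the input lines lying within two line lengths of it, with weights computed from the line distances.

```python
import numpy as np

def cubic_kernel(d, a=-0.5):
    """Cubic-convolution weight for a line at distance |d| < 2."""
    d = abs(d)
    if d < 1:
        return (a + 2) * d**3 - (a + 3) * d**2 + 1
    if d < 2:
        return a * d**3 - 5 * a * d**2 + 8 * a * d - 4 * a
    return 0.0

def scale_lines(image, num_out_lines):
    """Resize an image vertically: each output line is a weighted sum of the
    input lines lying within two line lengths of its position."""
    num_in_lines = image.shape[0]
    out = np.zeros((num_out_lines,) + image.shape[1:], dtype=np.float64)
    for j in range(num_out_lines):
        pos = j * (num_in_lines - 1) / max(num_out_lines - 1, 1)  # position among input lines
        lo, hi = int(np.floor(pos)) - 1, int(np.floor(pos)) + 2
        weight_sum = 0.0
        for i in range(lo, hi + 1):
            w = cubic_kernel(pos - i)
            if 0 <= i < num_in_lines and w != 0.0:
                out[j] += w * image[i]
                weight_sum += w
        if weight_sum:
            out[j] /= weight_sum        # renormalize near the image borders
    return out

frame = np.random.rand(480, 640)
print(scale_lines(frame, 600).shape)     # (600, 640)
```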

32 citations


Journal ArticleDOI
TL;DR: Deformed cross-dissolves are methods for inconspicuous interpolation between images using correspondence points in the images to be interpolated and an algorithm for automatic establishment of these correspondence points.
Abstract: Deformed cross-dissolves are methods for inconspicuous interpolation between images. We describe methods for deformation based on scattered data interpolation methods using correspondence points in the images to be interpolated and an algorithm for automatic establishment of these correspondence points. We also describe efficient cross-dissolve algorithms for the computation of intermediate images. Results for interpolation in the field of medical visualization are presented.

26 citations


Patent
Robert C. Kidd1
25 Feb 1994
TL;DR: In this article, look-up tables for input pixel addressing and convolution of up to two adjacent input pixels were used for scaling image pixel data by a rational scale factor of L/M.
Abstract: A method and apparatus for scaling image pixel data by a rational scale factor of L/M utilizes look-up tables for input pixel addressing and convolution of up to two adjacent input pixels, offering an improved implementation of a conventional sampling rate conversion system wherein the input pixels are upsampled by a factor of L by inserting L-1 zeroes, the zero-valued samples are interpolated by finite impulse response techniques, and the interpolated data is down-sampled by a factor of M.
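A minimal sketch of the equivalent polyphase view of this L/M conversion, under the assumption of a two-pixel (linear) interpolation kernel: instead of explicit zero-insertion, filtering and down-sampling, each output pixel is computed from precomputed tables of input-pixel addresses and weights.

```python
import numpy as np

def build_tables(L, M, num_out):
    """Precompute, for each output pixel, the index of the left input pixel and
    the fractional weight of the right one (linear-interpolation polyphase tables)."""
    idx = np.empty(num_out, dtype=np.int64)
    frac = np.empty(num_out, dtype=np.float64)
    for n in range(num_out):
        pos = n * M / L                  # output sample n maps to input position n*M/L
        idx[n] = int(np.floor(pos))
        frac[n] = pos - idx[n]
    return idx, frac

def rescale_row(row, L, M):
    """Resample a 1-D row of pixels by the rational factor L/M."""
    num_out = (len(row) * L) // M
    idx, frac = build_tables(L, M, num_out)
    idx = np.clip(idx, 0, len(row) - 2)  # keep the two-pixel neighborhood in range
    return (1.0 - frac) * row[idx] + frac * row[idx + 1]

row = np.arange(10, dtype=np.float64)
print(rescale_row(row, L=3, M=2))        # 15 output pixels from 10 inputs
```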

21 citations


Journal ArticleDOI
TL;DR: Findings indicate that texturally-based image analysis procedures may be most appropriately applied prior to image resampling.
Abstract: An empirical investigation into the effects of bilinear and cubic convolution resampling upon the textural information content of a high spatial resolution remotely sensed image is described. Textural feature values calculated with the grey level difference histogram algorithm are shown to be modified in a complex manner by the resampling process. The modification is found to be dependent upon the resampling technique, the parameters of the texture algorithm and upon the grey level structure of the image. These findings indicate that texturally-based image analysis procedures may be most appropriately applied prior to image resampling.

Patent
28 Jul 1994
TL;DR: In this article, a bilinear sequencer is used to interpolate image data from original image data represented in terms of pixels, each of which is defined by value and position in an original image.
Abstract: Method and apparatus for providing interpolated image data from original image data represented in terms of pixels, each pixel defined in terms of value and position in an original image, includes an original image input receiving original image data from an original image source; a page memory operatively connected to the input for storing a page of original image received; a source of interpolation parameters indicating: a slow scan initial pixel value Xinit, a fast scan initial pixel value Yinit, a fast scan x offset value FSx, a fast scan y offset value FSy, a slow scan x offset value SSx, and a slow scan y offset value SSy; a bilinear sequencer calculating for each new pixel, from the received parameters a reference pixel within the image, and a pair of interpolation coefficients for interpolating new pixel values; a memory controller retrieving to an interpolation calculator from the page memory a set of original image pixels including the pixel at the reference position, and three other pixels whose position is a predetermined function of the position of the reference position pixel for each new pixel; and an interpolation calculator calculating a new pixel value as a function of the set of original pixels directed to it by the bilinear sequencer.
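A minimal software sketch of the bilinear step such a sequencer performs: for each output pixel a reference pixel and a pair of fractional interpolation coefficients are computed, a set of four original pixels around the reference position is fetched, and the new value is the bilinear blend. The simple address arithmetic below is an illustrative stand-in for the patent's FSx/FSy/SSx/SSy accumulator scheme.

```python
import numpy as np

def bilinear_sample(image, y, x):
    """Interpolate one output pixel at continuous position (y, x)."""
    r = min(max(int(np.floor(y)), 0), image.shape[0] - 2)   # reference pixel row
    c = min(max(int(np.floor(x)), 0), image.shape[1] - 2)   # reference pixel column
    fy, fx = y - r, x - c                                   # pair of interpolation coefficients
    p00, p01 = image[r, c],     image[r, c + 1]             # reference pixel and neighbors
    p10, p11 = image[r + 1, c], image[r + 1, c + 1]
    top = (1 - fx) * p00 + fx * p01
    bot = (1 - fx) * p10 + fx * p11
    return (1 - fy) * top + fy * bot

def scale_bilinear(image, out_h, out_w):
    """Produce a resized image one output pixel at a time."""
    out = np.empty((out_h, out_w))
    for j in range(out_h):
        for i in range(out_w):
            y = j * (image.shape[0] - 1) / max(out_h - 1, 1)
            x = i * (image.shape[1] - 1) / max(out_w - 1, 1)
            out[j, i] = bilinear_sample(image, y, x)
    return out

img = np.random.rand(64, 64)
print(scale_bilinear(img, 96, 96).shape)   # (96, 96)
```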

Journal ArticleDOI
TL;DR: Results show that IM-GPDCT outperforms the interpolation methods in terms of restoration error, and improves the image quality of the magnified image, yielding image sharpness, non-jagged edges and reproduction of the original texture.
Abstract: This paper proposes an image magnification method, "IM-GPDCT," which is an iterative application of the Gerchberg-Papoulis (GP) algorithm with the discrete cosine transform (DCT), and evaluates its performance. Conventional image magnification by interpolation has the problem that degradation of image quality is inevitable, since it is essentially impossible to restore the spatial high-frequency components lost in the observation process. To address this, IM-GPDCT improves the image quality of a magnified image by restoring the spatial high-frequency components lost in the observation. IM-GPDCT uses the GP algorithm as the basic principle for extending the frequency band. The spatial high-frequency components are restored during forward and inverse iterative transformation of the image by the DCT, using two constraints: that the spatial extent of the image is finite, and that correct information is already known for the low-frequency components. The proposed method is compared to three conventional interpolation methods from the viewpoints of restoration error and image quality. The results show that IM-GPDCT outperforms the interpolation methods in terms of restoration error. Simulation results also show that the presented method improves the image quality of the magnified image, yielding image sharpness, non-jagged edges and reproduction of the original texture.
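A minimal sketch of a Gerchberg-Papoulis band-extension loop using the DCT, in the spirit of IM-GPDCT; the magnification factor, iteration count, and the use of intensity clamping as the spatial-domain constraint are simplifying assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def im_gpdct_sketch(low_res, factor=2, iterations=50):
    """Magnify an image by iteratively restoring high spatial frequencies:
    alternate between enforcing the known low-frequency DCT band and a
    simple spatial-domain constraint (here: clamping to a valid intensity range)."""
    h, w = low_res.shape
    H, W = h * factor, w * factor
    # The observed image fixes the low-frequency band; the scale factor accounts
    # for the size change under the orthonormal DCT.
    known = factor * dctn(low_res, norm='ortho')
    estimate = np.zeros((H, W))
    for _ in range(iterations):
        coeffs = dctn(estimate, norm='ortho')
        coeffs[:h, :w] = known                  # constraint 1: low band is known
        estimate = idctn(coeffs, norm='ortho')
        estimate = np.clip(estimate, 0.0, 1.0)  # constraint 2: simplified spatial constraint
    return estimate

small = np.random.rand(32, 32)
print(im_gpdct_sketch(small, factor=2).shape)   # (64, 64)
```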

Proceedings ArticleDOI
16 Sep 1994
TL;DR: In this paper, a motion-compensated frame interpolation algorithm for frame (field) rate upconversion is proposed, which allows frames (or pairs of fields) to be interpolated between two originally consecutive frames (fields) of a digital television sequence while preserving the stationary background.
Abstract: This paper proposes a new motion-compensated frame (field) interpolation algorithm for frame (field) rate upconversion, which allows us to interpolate frames (or pairs of fields) between two originally consecutive frames (fields) of a digital television sequence while preserving the stationary background. First, for interlaced material, a de-interlacing process is used to convert the interlaced format to a progressive one and reduce the motion range. A video scene can be temporally categorized by a change detector into changed and unchanged regions. Each changed region is further separated into moving objects, covered regions, and uncovered regions. To interpolate the intermediate field (frame), we have developed a direct motion interpolation method and an indirect motion interpolation method to fill the moving-object areas in the changed regions, and then apply forward/backward motion extrapolation to fill the covered/uncovered regions. Finally, hybrid repetition is used to interpolate the unchanged regions. In the experiments, we show the interpolated fields and frames for two standard image sequences.
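A minimal sketch of the general change-detection-plus-motion-compensation idea (the block-matching search, the change threshold, and the simple halfway placement are illustrative simplifications; the paper's covered/uncovered handling and hybrid repetition are not reproduced):

```python
import numpy as np

def block_motion(prev, nxt, block=8, search=4):
    """Brute-force block matching from prev to nxt; returns per-block (dy, dx)."""
    H, W = prev.shape
    vecs = np.zeros((H // block, W // block, 2), dtype=np.int64)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= H and 0 <= xx and xx + block <= W:
                        sad = np.abs(ref - nxt[yy:yy + block, xx:xx + block]).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            vecs[by, bx] = best
    return vecs

def interpolate_frame(prev, nxt, block=8, change_thresh=0.05):
    """Build the halfway frame: repeat unchanged background, motion-compensate the rest."""
    mid = 0.5 * (prev + nxt)                       # fallback for changed regions
    vecs = block_motion(prev, nxt, block)
    H, W = prev.shape
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            dy, dx = vecs[by, bx]
            hy, hx = y + dy // 2, x + dx // 2      # halfway along the motion trajectory
            if 0 <= hy and hy + block <= H and 0 <= hx and hx + block <= W:
                mid[hy:hy + block, hx:hx + block] = 0.5 * (
                    prev[y:y + block, x:x + block]
                    + nxt[y + dy:y + dy + block, x + dx:x + dx + block])
    changed = np.abs(nxt - prev) > change_thresh   # change detector
    mid[~changed] = prev[~changed]                 # unchanged background is repeated
    return mid

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
print(interpolate_frame(a, b).shape)               # (64, 64)
```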

Patent
01 Apr 1994
TL;DR: A color video thermal printing method for printing a full-color frame image on the basis of a color video signal of a full-color field image by interpolation is presented in this article.
Abstract: A color video thermal printing method for printing a full-color frame image on the basis of a color video signal of a full-color field image by interpolation. Interpolation for at least one of three primary colors is made as follows: sums A1, A2 and A3 of image data D1 and D4; D2 and D5; and D3 and D6 of respectively two pixels P1 and P4; P2 and P5; and P3 and P6 are calculated, assuming the pixels P1, P2 and P3 are aligned in this order in a line of the field image, and the pixels P4, P5 and P6 are aligned in this order in an adjacent line of the field image, wherein the pixels P2 and P5 are aligned in the vertical direction with a pixel Px to be interpolated, and are disposed on opposite sides of the pixel Px. If A1 A2>A3, differences S1=|D1-D6|, S2=|D2-D5|, and S3=|D3-D4| are calculated. If S1 S2>S3, an average value (D3+D4)/2 is used as interpolation data. In other cases, an average value (D2+D5)/2 is used as interpolation data.

Journal ArticleDOI
TL;DR: A comparative series of results obtained with various interpolating algorithms, both on test functions and on real images, shows that the behavior of the moving window reconstruction is outstanding in all cases, provided that a good interpolating kernel and a proper window are adopted.

Proceedings ArticleDOI
01 May 1994
TL;DR: In this paper, a method for scaling the reconstruction image without changing its fringe element, i.e., pixel size, is proposed, where the hologram is divided into small blocks and these blocks are repositioned in order to enlarge or demagnify the reconstructed image.
Abstract: A 2D image can be scaled simply by changing its pixel size. If we were to apply this model to a hologram, it would cause distortion in the reconstructed 3D images, and a change in the viewing angle. We propose a method for scaling the reconstruction image without changing its fringe element, i.e., pixel size. We divide the hologram into small blocks. Then, these blocks are repositioned in order to magnify or demagnify the reconstructed image. This method maintains the viewing angle and causes no distortion. It does, however, blur the reconstructed image. To reduce the blur, we propose a method of interpolation in the frequency domain. Results of experiments performed on an electro-holography system are also presented.

Patent
Seong-Won Lee1, Joonki Paik1
14 Jan 1994
TL;DR: In this article, a method of interpolating digital image data includes steps for calculating the absolute value of the difference between two neighboring pixels of image data, among four neighboring pixels, determining the maximum and second-largest values among the calculated absolute values, and interpolating the image data using the results.
Abstract: A method of interpolating digital image data includes steps for calculating the absolute value of the difference between two neighboring pixels of image data, among four neighboring pixels of image data, determining the maximum and second-largest values among the thus-calculated absolute values, and interpolating the image data using the results. An interpolation circuit for performing the above-described method includes an edge detector, a pre-filter, a zero-order interpolation area controller, a zero-order interpolator, a movement averaging device, a magnification factor controller and a post-filter.

PatentDOI
05 Aug 1994
TL;DR: This algorithm solves a quantization error problem which had prohibited practical applications of any polynomial as an interpolant for image scaling and can be applied for scaling binary images in the areas of facsimile imaging and font scaling.
Abstract: A piecewise polynomial interpolation scheme treats images as three-dimensional data in which the X and Y coordinates are the input image dimensions, and the Z coordinate is the intensity of the image. The three-dimensional data set is fitted by a surface, and a resampling process on the fitting surface provides interpolative data. A thresholding process applied on these interpolative data produces a final image output. Based on the interpolating scheme, each output pixel is a weighted average of its neighboring pixels, with weights determined by the type of the interpolant, its degree, and the desired scaling factor. The weights may be pre-calculated for fixed scaling factors, such that the convolution is accomplished by table lookup. Additionally, the resampling process may include a phase shifting to realign said sampling location with respect to an input image.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: A new FIR image interpolation filter known as a perceptually weighted least square (PWLS) filter which is designed using both sampling theory and properties of human vision to minimize the ripple response around edges of the interpolated images and to best satisfy frequency response constraints.
Abstract: Image interpolation is an important image operation. It is commonly used in image enlargement to obtain a close-up view of the detail of an image. From sampling theory, an ideal low-pass filter can be used for image interpolation. However, ripples appear around image edges which are annoying to a human viewer. The authors introduce a new FIR image interpolation filter known as a perceptually weighted least square (PWLS) filter which is designed using both sampling theory and properties of human vision. The goal of this design approach is to minimize the ripple response around edges of the interpolated images and to best satisfy frequency response constraints. The interpolation results using the proposed approach are substantially better than those resulting from replication or bilinear interpolation, and are at least as good as and possibly better than those of cubic convolution interpolation.

Patent
Ryuichi Yamaguchi1
23 Nov 1994
TL;DR: In this article, a changeover switch is used to select the image signal interpolated with a vertical low-pass filter when the sub-sampling circuit uses a vertical filter, or with both vertical and horizontal low-pass filters when a two-dimensional filter is used.
Abstract: After being band-limited with a vertical or horizontal low-pass filter in a sub-sampling circuit at the image-signal transmitting side, an image signal is sub-sampled with an inter-frame offset in a quincunx pattern, and then transmitted to an image-signal receiving side through a transmission system. The sampling rate of the received image signal is converted by a sampling rate conversion portion. According to the operation of a changeover switch by the operator, who watches a picture tube for image reproduction, a selector selects the image signal interpolated with a vertical filter of a low-pass portion when the sub-sampling circuit uses a vertical filter, or the image signal interpolated with both vertical and horizontal low-pass filters when the sub-sampling circuit uses a two-dimensional filter. Thus, an image signal is supplied which has been interpolated with a low-pass filter whose passband is identical or close to that of the low-pass filter in the sub-sampling circuit at the transmitting side, so that the image signal contains no aliasing interference in the spatial frequency band. A good picture can thus be reproduced in which a horizontal straight line is not displayed as a broken line.

Patent
20 Oct 1994
TL;DR: In this paper, the authors propose a method for avoiding a sense of incongruity by ensuring that the size of an observed image is perceived to be the actual size even when the interval between optical axes differs between image pickup and observation.
Abstract: PURPOSE: To avoid a sense of incongruity by devising the method so that the size of an observed image is perceived to be the actual size even when the interval between optical axes differs between image pickup and observation. CONSTITUTION: The display device is provided with a left image pickup system 1L and a right image pickup system 1R whose optical axes are arranged in parallel at a prescribed interval 10, an image input section 2, image memories 3L and 3R, an image interpolation section 4, an image output section 6 and an HMD 7. One image, generated through image interpolation from the two images, is displayed so that the parallax of the images in the case of HMD display is equal to the parallax Δ1 obtained when the object is actually viewed. Reference-point extraction processing is conducted using the left image data and right image data input to the image interpolation section 4. A small area of the left image is segmented as a template and moved in parallel to detect the position at which the total sum of differences from the right image data is smallest, and the result is used as the pixel position in the corresponding right image. The distance information is expressed as 1/Z = (xL - xR)/(f·10) and the X-coordinate of the interpolated image is expressed as x = xL - f·11/Z, based on the pixel positions xL and xR of the corresponding left and right images and the image pickup parameters.

Proceedings ArticleDOI
21 Dec 1994
TL;DR: In this article, texture measures originating from local grey level co-occurrence matrices (GLCM) and second from local autocorrelation functions (ACF) are used.
Abstract: Local second order properties, describing spatial relations between pixels, are introduced into the single-point speckle adaptive filtering processes, in order to account for the effects of speckle spatial correlation and to enhance scene textural properties in the restored image. To this end, texture measures originating first from local grey-level co-occurrence matrices (GLCM), and second from local autocorrelation functions (ACF), are used. Results obtained on 3-look processed ERS-1 FDC and PRI spaceborne images illustrate the performance allowed by the introduction of these texture measures in the structure-retaining speckle adaptive filters. The observable texture in remote sensing images is related to the physical spatial resolution of the sensor. For this reason, other spatial speckle decorrelation methods, simpler and easier to implement, for example post-filtering and linear image resampling, are also presented in this paper. In the particular case of spaceborne SAR imagery, all these methods lead to visually similar results. They produce filtered (radar reflectivity) images visually comparable to optical images.

Journal ArticleDOI
TL;DR: An image processing scheme is described that is able to interpolate and enhance an image in a unified moving-window operation so that the resulting spectrum approximates that of the original scene.
Abstract: The use of focal plane arrays in IR imaging systems is becoming increasingly important, but in the long-wave IR region the number of detector elements in the array is limited by the current state of technology, and this in turn restricts the available spatial resolution or field of view. An image processing scheme is described that is able to interpolate and enhance an image in a unified moving window operation. The image is first expanded to the required size by pixel replication and then processed so that the resulting spectrum approximates that of the original scene. A numerical method has been developed to calculate the interpolator and its use has been demonstrated in a computer simulation.
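A minimal sketch of the replicate-then-filter idea: expand by pixel replication, then apply a small moving-window filter. The separable kernel below is an illustrative choice, not the numerically optimized interpolator described in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def replicate_and_filter(image, factor=2):
    """Expand by pixel replication, then filter so the spectrum of the result
    better approximates that of the underlying scene."""
    expanded = np.kron(image, np.ones((factor, factor)))   # pixel replication
    # Small moving-window kernel: smooths the blocky replication structure.
    k1d = np.array([0.25, 0.5, 0.25])
    kernel = np.outer(k1d, k1d)
    return convolve(expanded, kernel, mode='nearest')

ir_frame = np.random.rand(64, 64)
print(replicate_and_filter(ir_frame).shape)    # (128, 128)
```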

Patent
04 Oct 1994
TL;DR: In this article, the interpolation mechanism for providing interpolated image data from original image data represented in terms of pixels, each pixel defined by value and position in an original image, includes an original input (20) receiving original image input from an original source; a page memory (12) operatively connected to the input for storing a page of original image received; a bilinear sequencer (32) calculating for each new pixel, from the received parameters a reference pixel within the image, and a pair of interpolation coefficients for interpolating new pixel values.
Abstract: Method and apparatus for providing interpolated image data from original image data represented in terms of pixels, each pixel defined in terms of value and position in an original image, includes an original image input (20) receiving original image data from an original image source; a page memory (12) operatively connected to the input for storing a page of original image received; a source (22) of interpolation parameters indicating: a slow scan initial pixel value Xinit, a fast scan initial pixel value Yinit, a fast scan x offset value FSx, a fast scan y offset value FSy, a slow scan x offset value SSx, and a slow scan y offset value SSy; a bilinear sequencer (32) calculating for each new pixel, from the received parameters a reference pixel within the image, and a pair of interpolation coefficients for interpolating new pixel values; a memory controller (22) retrieving to an interpolation calculator (34) from the page memory (12) a set of original image pixels including the pixel at the reference position, and three other pixels whose position is a predetermined function of the position of the reference position pixel for each new pixel; and the interpolation calculator (34) calculating a new pixel value as a function of the set of original pixels directed to it by the bilinear sequencer (32).

Proceedings ArticleDOI
13 Nov 1994
TL;DR: Pixels, segments, and triangles (with end-points stored during the projection), or combinations of these end-points applied to the triangulation, are the most common structures used in modelling, and the method is extended to non-linear interpolation.
Abstract: Defines special algorithms working directly in screen space for synthetic animation. These operate on synthetic animation based on spatio-temporal interpolation, which is a 4D interpolation with three dimensions in space and one in time. In order to avoid heavy memory storage and calculations, I discuss 3D interpolation (two dimensions in space plus one in time) between two key frames resulting from the projection of two 3D scenes. Extension to 4D interpolation is discussed for some special algorithms. Pixels, segments, and triangles (with end-points stored during the projection), matched by spatial linear interpolation, or combinations of these end-points applied to the triangulation, are the most common structures used in modelling. The method is extended to non-linear interpolation. This reduces undesirable visual effects and computation time, thereby alleviating one of the most important problems in synthetic animation and reconciling detailed graphical quality with real time.

Journal ArticleDOI
TL;DR: The results show that zooming generally enhances performance, but that for zooming factors greater than 2, smooth zooming techniques need to be used.
Abstract: Standard digital video displays use 640 x 480 (NTSC) or 512 x 512 (PAL) pixels to display a full screen image, while observers searching such images for small targets (or reading text) will typically operate with a screen subtense of 25 x 35 deg. Often, however, the region of interest in these images may be about 100 x 100 pixels in size, and so subtend only about 5 x 5 deg on a standard screen. In this case, enlargement (that is, increasing the angular subtense) of the region of interest may well be appropriate. To obtain a larger viewing angle, the image must be zoomed, with some form of interpolation being used to generate new intermediate pixels. This paper reports on two experiments in which the effects of various methods of zooming on target acquisition are psychophysically evaluated. The results show that zooming generally enhances performance, but that for zooming factors greater than 2, smooth zooming techniques need to be used.

Proceedings ArticleDOI
19 Apr 1994
TL;DR: This paper first investigates the geometric relationship between a point in object space and its projection onto a view image, and proposes the affine transform, which utilizes the geometric constraints between view images.
Abstract: This paper is concerned with the data compression and interpolation of multi-view images. We propose affine-based disparity compensation based on a geometric relationship. We first investigate the geometric relationship between a point in object space and its projection onto a view image. Then, we propose disparity compensation based on the affine transform, which utilizes the geometric constraints between view images. In this scheme, multi-view images are compressed into the structure and texture of triangular patches. This scheme not only compresses the multi-view images but also synthesizes view images from any viewpoint in the viewing zone, because the geometric relationship is taken into account. Finally, we report an experiment in which 19 view images were used as the original multi-view image and the amount of data was reduced to 1/19 with an SNR of 34 dB.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: Modifications are proposed to this feature-based warping algorithm that reduce artifacts created in the presence of large distortions, resulting in a novel interpolation scheme in which motion estimates are concentrated near object boundaries and other prominent edges.
Abstract: Describes a new method for image sequence interpolation. The approach uses a feature-based image warping algorithm to align linear image features in two or more key frames. We propose modifications to this feature-based warping algorithm which reduce artifacts created in the presence of large distortions. In addition, an algorithm for selecting and tracking the linear features required by the warping algorithm is presented. Combining feature-based image warping with a selection and tracking algorithm results in a novel interpolation scheme in which motion estimates are concentrated near object boundaries and other prominent edges. Results are shown for interpolation of a video teleconferencing sequence.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: An unusual parameterization of motion is used to model scanner trajectory efficiently as a kinematic chain and a new technique is proposed for determining whether a 3-D scene point lies within the frustum of a geometrically distorted image pixel.
Abstract: Spacecraft, aircraft, and seacraft are common free-moving platforms for the scanning of visible-light, radar, and sonar imagery. The motion of these platforms during scanning dictates image interior geometry. Hence such imagery is significantly different to that of the conventional frame-camera. It is explained why knowledge of platform motion is essential in scanned image resampling for rectification of geometric distortion. An unusual parameterization of motion is used to model scanner trajectory efficiently as a kinematic chain. Using this parameterisation, a new technique is proposed for determining whether a 3-D scene point lies within the frustum of a geometrically distorted image pixel. This inclusion test is important if the inverse mapping approach to image transformation is used for rectification. >