
Showing papers on "Image scaling published in 2005"


Journal ArticleDOI
TL;DR: The purpose of this article is to provide a nuts and bolts procedure for calculating scale factors used for reconstructing images directly in SNR units and to validate the method for SNR measurement with phantom data.
Abstract: The method for phased array image reconstruction of uniform noise images may be used in conjunction with proper image scaling as a means of reconstructing images directly in SNR units. This facilitates accurate and precise SNR measurement on a per pixel basis. This method is applicable to root-sum-of-squares magnitude combining, B1-weighted combining, and parallel imaging such as SENSE. A procedure for image reconstruction and scaling is presented, and the method for SNR measurement is validated with phantom data. Alternative methods that rely on noise-only regions are not appropriate for parallel imaging where the noise level is highly variable across the field-of-view. The purpose of this article is to provide a nuts-and-bolts procedure for calculating scale factors used for reconstructing images directly in SNR units. The procedure includes scaling for the noise equivalent bandwidth of digital receivers, FFTs and associated window functions (raw data filters), and array combining.

479 citations
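
The scaling recipe lends itself to a compact implementation. Below is a minimal NumPy sketch of the root-sum-of-squares case, assuming equal, uncorrelated channel noise estimated from a noise-only acquisition; the paper's full procedure also folds in receiver noise-equivalent bandwidth and FFT/raw-filter scale factors, which are omitted here, and all names are illustrative.

```python
import numpy as np

def rss_snr_image(coil_images, noise_samples):
    """Root-sum-of-squares combination scaled into SNR units.

    coil_images:   complex array of shape (n_coils, ny, nx).
    noise_samples: complex noise-only samples per coil (e.g. from a
                   noise pre-scan), used to estimate the channel sigma.
    Assumes equal, uncorrelated noise across channels.
    """
    # Real and imaginary parts each carry variance sigma^2.
    sigma = np.sqrt(np.mean(np.abs(noise_samples) ** 2) / 2.0)
    rss = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
    # With this scaling, a pixel value of 20 means SNR = 20 at that pixel.
    return rss / sigma
```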


Proceedings ArticleDOI
09 May 2005
TL;DR: A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor; the detector exploits a periodicity in the second derivative signal of interpolated images.
Abstract: A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor. The interpolation detection algorithm exploits a periodicity in the second derivative signal of interpolated images. The algorithm performs well for a wide variety of interpolation factors, both integer and non-integer. The algorithm's performance is noted with respect to a digital camera's "digital zoom" feature. Overall, the algorithm has demonstrated robust results and might prove useful in situations where the original resolution of an image determines the action of an image processing chain.

208 citations
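
The detector's core idea, periodicity in the second derivative, reduces to a few lines. The sketch below is my reconstruction of that mechanism, not the authors' code: it averages the absolute second difference over rows and inspects its spectrum, where resampled images tend to show spikes tied to the interpolation factor.

```python
import numpy as np

def second_derivative_spectrum(image):
    """Spectrum of the row-averaged |second derivative|; strong peaks
    suggest the image was interpolated, and their position hints at
    the interpolation factor."""
    img = np.asarray(image, dtype=float)
    d2 = img[:, :-2] - 2.0 * img[:, 1:-1] + img[:, 2:]  # horizontal d^2/dx^2
    signal = np.mean(np.abs(d2), axis=0)                # average over rows
    signal -= signal.mean()                             # drop the DC term
    return np.abs(np.fft.rfft(signal))
```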


Journal ArticleDOI
TL;DR: Novel efforts are made to further extend this systematic variational framework to the inpainting of oscillatory textures, interpolation of missing wavelet coefficients as in the wireless transmission of JPEG2000 images, as well as light-adapted inpainting schemes motivated by Weber's law in visual perception.
Abstract: Inpainting is an image interpolation problem with broad applications in image and vision analysis. Described in the current expository paper are our recent efforts in developing universal inpainting models based on the Bayesian and variational principles. Discussed in detail are several variational inpainting models built upon geometric image models, the associated Euler-Lagrange PDEs and their geometric and dynamic interpretations, as well as effective computational approaches. Novel efforts are then made to further extend this systematic variational framework to the inpainting of oscillatory textures, interpolation of missing wavelet coefficients as in the wireless transmission of JPEG2000 images, as well as light-adapted inpainting schemes motivated by Weber's law in visual perception. All these efforts lead to the conclusion that unlike many familiar image processors such as denoising, segmentation, and compression, the performance of a variational/Bayesian inpainting scheme much more crucially depends on whether the image prior model well resolves the spatial coupling (or geometric correlation) of image features. As a highlight, we show that the Besov image models appear to be less interesting for image inpainting in the wavelet domain, highly contrary to their significant roles in thresholding-based denoising and compression. Thus geometry is the single most important keyword throughout this paper. © 2005 Wiley Periodicals, Inc.

168 citations
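
The paper surveys a family of variational models; the simplest member, harmonic inpainting (minimising the Dirichlet energy), already illustrates what an Euler-Lagrange iteration looks like. Here is a toy sketch, with periodic boundary handling kept only for brevity; the geometric models discussed in the paper preserve edges far better than this one.

```python
import numpy as np

def harmonic_inpaint(image, mask, n_iter=2000):
    """Fill pixels where mask is True by relaxing the discrete Laplace
    equation, i.e. gradient descent on the energy  integral |grad u|^2."""
    u = np.array(image, dtype=float)
    u[mask] = u[~mask].mean()                 # crude initialisation
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]                   # known pixels stay fixed
    return u
```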


Journal ArticleDOI
TL;DR: Experimental results show that the subjective quality of the interpolated images is substantially improved by using the proposed algorithm compared with that of using conventional interpolation algorithms.

124 citations


Journal ArticleDOI
10 Oct 2005
TL;DR: A novel and robust colour watermarking approach for copy protection and digital archives that hides the watermark in the DC components of the colour image directly in the spatial domain, followed by a saturation adjustment technique performed in RGB space.
Abstract: Most colour watermarking methods are realised by modifying the image luminance or by processing each component of a colour space separately. This paper presents a novel and robust colour watermarking approach for applications in copy protection and digital archives. The proposed scheme considers chrominance information that can be utilised for information embedding. This work presents an approach for hiding the watermark in the DC components of the colour image directly in the spatial domain, followed by a saturation adjustment technique performed in RGB space. The merit of the proposed approach is that it not only provides promising watermarking performance but is also computationally efficient. Experimental results demonstrate that this scheme successfully makes the watermark perceptually invisible and robust to image processing operations such as JPEG2000 and JPEG lossy compression, lowpass filtering, median filtering, image scaling, and image cropping.

104 citations
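
As a rough illustration of spatial-domain DC embedding, the sketch below hides one bit per block by shifting the block mean up or down. This is a generic scheme, not the authors' method; their approach additionally exploits chrominance and applies a saturation adjustment in RGB space, neither of which is reproduced here.

```python
import numpy as np

def embed_bits_in_block_dc(channel, bits, strength=4.0, block=8):
    """Shift each block's mean (its DC value) by +/- strength to encode
    one bit per block, directly in the spatial domain."""
    out = np.array(channel, dtype=float)
    h, w = out.shape
    idx = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if idx >= len(bits):
                return out
            out[y:y + block, x:x + block] += strength if bits[idx] else -strength
            idx += 1
    return out
```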


Proceedings ArticleDOI
28 Sep 2005
TL;DR: In this paper, a hardware architecture for bicubic interpolation (HABI) is proposed; the proposed system runs 10 times faster than an Intel Pentium 4-based PC at 2.4 GHz.
Abstract: One of the most widely used algorithms for image scaling is bicubic interpolation. In this paper, a hardware architecture for bicubic interpolation (HABI) is proposed. The proposed HABI comprises three main blocks: the first generates the interpolation coefficients, implementing the bicubic function used in HABI; the second performs the interpolation process; and the third is a control unit that synchronizes the processing and the pipeline stages. The architecture works with monochromatic images, but it can be extended to work with RGB colour images. Our design is coded in the Handel-C language and implemented on a Xilinx Virtex II Pro FPGA. The proposed system runs 10 times faster than an Intel Pentium 4-based PC at 2.4 GHz. Comparisons with other related works are provided.

77 citations
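
For reference, this is the arithmetic a bicubic interpolator such as HABI must implement per output pixel: a 4x4 neighbourhood weighted by a cubic convolution kernel. A plain-Python sketch using Keys' kernel with a = -0.5; the bicubic function actually burned into the FPGA coefficients may differ.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel; a = -0.5 is the common choice."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_sample(img, y, x):
    """One output pixel from the 16-sample (4x4) window around (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    acc = 0.0
    for m in range(-1, 3):
        for n in range(-1, 3):
            yy = min(max(y0 + m, 0), img.shape[0] - 1)  # clamp at borders
            xx = min(max(x0 + n, 0), img.shape[1] - 1)
            acc += img[yy, xx] * cubic_kernel(y - (y0 + m)) \
                               * cubic_kernel(x - (x0 + n))
    return acc
```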


Journal ArticleDOI
TL;DR: The task of image interpolation and re-sampling for particle image velocimetry (PIV) is investigated, which is used for window shifting with sub-pixel accuracy and image or window deformation.
Abstract: The task of image interpolation and re-sampling for particle image velocimetry (PIV) is investigated, which is used for window shifting with sub-pixel accuracy and image or window deformation. A new interpolation scheme based on a Gaussian filter is introduced and compared with commonly used and widely accepted interpolation techniques in terms of the achievable root mean square deviation of the displacement estimates.

73 citations
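
The abstract does not give the filter's exact form, so the sketch below is only a generic Gaussian-weighted resampler of the kind the paper evaluates; the width sigma and the truncation radius are my choices, not the authors'.

```python
import numpy as np

def gaussian_resample(img, y, x, sigma=0.65, radius=2):
    """Intensity at sub-pixel position (y, x) as a normalised
    Gaussian-weighted sum of nearby pixels."""
    y0, x0 = int(round(y)), int(round(x))
    acc, wsum = 0.0, 0.0
    for m in range(-radius, radius + 1):
        for n in range(-radius, radius + 1):
            yy = min(max(y0 + m, 0), img.shape[0] - 1)
            xx = min(max(x0 + n, 0), img.shape[1] - 1)
            w = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma ** 2))
            acc, wsum = acc + w * img[yy, xx], wsum + w
    return acc / wsum
```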


Journal ArticleDOI
TL;DR: The proposed variational principle penalizes a departure from rigidity and thereby provides a natural generalization of strictly rigid registration techniques used widely in medical contexts.
Abstract: In this paper a variational method for registering or mapping like points in medical images is proposed and analyzed. The proposed variational principle penalizes a departure from rigidity and thereby provides a natural generalization of strictly rigid registration techniques used widely in medical contexts. Difficulties with finite displacements are elucidated, and alternative infinitesimal displacements are developed for an optical flow formulation which also permits image interpolation. The variational penalty against non-rigid flows provides sufficient regularization for a well-posed minimization and yet does not rule out irregular registrations corresponding to an object excision. Image similarity is measured by penalizing the variation of intensity along optical flow trajectories. The approach proposed here is also independent of the order in which images are taken. For computations, a lumped finite element Eulerian discretization is used to solve for the optical flow. Also, a Lagrangian integration of the intensity along optical flow trajectories has the advantage of prohibiting diffusion among trajectories which would otherwise blur interpolated images. The subtle aspects of the methods developed are illustrated in terms of simple examples, and the approach is finally applied to the registration of magnetic resonance images.

71 citations


Proceedings ArticleDOI
18 Mar 2005
TL;DR: Experimental results show the proposed algorithm has low computational complexity and produces accurate rendering, especially around object boundaries, where most existing methods fail.
Abstract: The paper proposes a new approach for the image-based rendering (IBR) problem. IBR has many potential applications, such as remote reality and telepresence, in which traditional computer graphic techniques require high computational complexity. Our algorithm proactively propagates all available information from actual cameras to virtual cameras, using a depth availability assumption. This process turns the IBR problem into a nonuniform interpolation problem at the virtual camera image plane, which can be done efficiently at once for all image pixels. Experimental results show the proposed algorithm has low computational complexity and produces accurate rendering, especially around object boundaries, where most existing methods fail.

61 citations


Patent
31 May 2005
TL;DR: In this paper, an image region dividing section partitions each input image into small regions, and a small region synthesized image generation section generates a base image of each small region, which is an image including only diffuse reflection, from each input image.
Abstract: An image region dividing section (105) partitions each input image into small regions, and a small region synthesized image generation section (106) generates a base image of each small region, which is an image including only diffuse reflection, from each input image. An image interpolation section (107) generates by interpolation the base image of any small region whose base image cannot be generated directly. A linearized image generation section (103) generates a linearized image, which is an image in an ideal state under a given lighting condition, using the base image of each small region.

58 citations


Journal ArticleDOI
TL;DR: This paper presents three computationally efficient solutions for the image interpolation problem, developed in a general framework; their performance is compared with traditional polynomial-based interpolation techniques and with iterative interpolation.

Journal ArticleDOI
TL;DR: A simple approach to computing the signal variance in registered images is described, based on the signal variance and covariance of the original images, the spatial transformations computed by the registration procedure, and the interpolation or approximation kernel chosen.

Proceedings ArticleDOI
22 Apr 2005
TL;DR: Experimental results show that the proposed algorithm outperforms decoder-only frame rate up conversion methods and gives better performance in terms of PSNR and visual quality over encoding at full frame rate without frame skipping.
Abstract: In low bandwidth video coding applications, frame rate is reduced to increase the spatial quality of the frames. However, video sequences that are encoded at low frame rates demonstrate motion jerkiness artifacts when displayed. Therefore, a mechanism is required at the decoder to increase the frame rate while keeping an acceptable level of spatial quality. In this paper, we present a new method to perform video frame interpolation by sending effective side information for frame rate up conversion applications. The proposed scheme encodes the skipped frames lightly by sending motion vectors and an important information map which indicates to the decoder the type of interpolation method to perform. We also propose a novel overhead reduction method to keep the side information cost low. Experimental results show that the proposed algorithm outperforms decoder-only frame rate up conversion methods and gives better performance in terms of PSNR and visual quality over encoding at full frame rate without frame skipping.
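
A baseline for what the decoder does with the transmitted motion vectors: fetch the half-displaced block from each neighbouring frame and average. This sketch assumes frame dimensions divisible by the block size and one integer vector per block; the paper's "important information map", which switches between interpolation modes per region, is not reproduced.

```python
import numpy as np

def interpolate_skipped_frame(prev, nxt, mvs, block=16):
    """Bidirectional motion-compensated average for the skipped frame.
    mvs[by, bx] = (dy, dx): integer motion of each block, prev -> nxt."""
    h, w = prev.shape
    out = np.empty_like(prev, dtype=float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = mvs[by // block, bx // block]
            y0 = min(max(by - dy // 2, 0), h - block)  # half-vector back
            x0 = min(max(bx - dx // 2, 0), w - block)
            y1 = min(max(by + dy // 2, 0), h - block)  # half-vector forward
            x1 = min(max(bx + dx // 2, 0), w - block)
            out[by:by + block, bx:bx + block] = 0.5 * (
                prev[y0:y0 + block, x0:x0 + block].astype(float) +
                nxt[y1:y1 + block, x1:x1 + block])
    return out
```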

Patent
17 Mar 2005
TL;DR: In this paper, an image processing apparatus is provided for accurately recognizing an edge direction to perform an accurate image interpolation, where a direction determining unit recognizes the edge direction of a remarked pixel and outputs it with information on its position to a reliability ranking unit and a directional distribution generating unit.
Abstract: An image processing apparatus is provided for accurately recognizing an edge direction to perform accurate image interpolation. A direction determining unit recognizes the edge direction of a remarked pixel and outputs it, with information on its position, to a reliability ranking unit and a directional distribution generating unit. A direction interpolating unit interpolates the remarked pixel by directional interpolation. The reliability ranking unit determines whether or not an interpolated pixel is properly interpolated by the direction interpolating unit, ranks its reliability, and outputs the result to the directional distribution generating unit. The directional distribution generating unit generates a directional distribution based on the directional information and reliability information. A direction selecting unit recognizes the edge direction based on the directional distribution generated by the directional distribution generating unit.

Proceedings ArticleDOI
16 May 2005
TL;DR: In this paper, a mask of at most four pixels is used to calculate the final luminosity of each pixel by combining two factors: the percentage of area that the mask covers from each source pixel and the difference in luminosity between the source pixels.
Abstract: The proposed scaling algorithm outperforms other standard and widely used scaling techniques. The algorithm uses a mask of at most four pixels and calculates the final luminosity of each pixel by combining two factors: the percentage of area that the mask covers from each source pixel and the difference in luminosity between the source pixels. The interpolation is capable of scaling both grey-scale and color images of any resolution at any scaling factor. Its key characteristics and low complexity make the interpolation very fast and suitable for real-time implementation. Performance results on a variety of standard tests are presented and compared with other scaling algorithms.
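
Of the two factors the algorithm combines, the area-coverage factor alone is easy to sketch: centre a unit-square mask at the mapped source position and weight the (at most four) covered pixels by overlap area. Note that these area weights coincide with bilinear weights; the paper's contribution, modulating them by the luminosity difference between source pixels, is not reproduced here.

```python
import numpy as np

def overlap_weights(fy, fx):
    """Overlap areas between a unit-square mask centred at fractional
    offset (fy, fx) and the four underlying source pixels."""
    return np.array([[(1 - fy) * (1 - fx), (1 - fy) * fx],
                     [fy * (1 - fx),       fy * fx]])

def area_scale(img, factor):
    """Scale a grey-scale image using the area-overlap mask."""
    h, w = img.shape
    out = np.empty((int(h * factor), int(w * factor)))
    for oy in range(out.shape[0]):
        for ox in range(out.shape[1]):
            sy, sx = oy / factor, ox / factor
            iy, ix = min(int(sy), h - 2), min(int(sx), w - 2)
            wgt = overlap_weights(sy - iy, sx - ix)
            out[oy, ox] = np.sum(wgt * img[iy:iy + 2, ix:ix + 2])
    return out
```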

01 Dec 2005
TL;DR: This paper presents and implements an algorithm to perform natural neighbor interpolation using graphics hardware that computes the entire scalar field induced by natural neighbor interpolation, at which point a query is a trivial array lookup, and range queries over the field are easy to perform.
Abstract: Natural neighbor interpolation is a weighted average interpolation method that is based on Voronoi tessellation. In this paper, we present and implement an algorithm to perform natural neighbor interpolation using graphics hardware. Unlike traditional software-based approaches that process one query at a time, we develop a scheme that computes the entire scalar field induced by natural neighbor interpolation, at which point a query is a trivial array lookup, and range queries over the field are easy to perform. Our approach is significantly faster than the best known software implementations, and makes use of general purpose stream programming capabilities of current graphics cards. We also present a simple scheme that requires no advanced graphics capabilities and can process natural neighbor queries faster than existing software-based approaches.
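
The "entire scalar field" idea can be imitated in software with a discrete Voronoi rasterisation, which is essentially what the graphics hardware provides. A minimal sketch of the discrete method: rasterise cell ownership on a grid, then weight each site by how much area an inserted query point would steal from it. The grid resolution and bounding box are my assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def natural_neighbor(sites, values, query, grid_res=400):
    """Discrete natural-neighbour interpolation at one query point."""
    sites = np.asarray(sites, float)
    values = np.asarray(values, float)
    q = np.asarray(query, float)
    lo, hi = sites.min(axis=0) - 1.0, sites.max(axis=0) + 1.0
    xs = np.linspace(lo[0], hi[0], grid_res)
    ys = np.linspace(lo[1], hi[1], grid_res)
    gx, gy = np.meshgrid(xs, ys)
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    tree = cKDTree(sites)
    dist, owner = tree.query(cells)          # rasterised Voronoi diagram
    stolen = np.linalg.norm(cells - q, axis=1) < dist  # cells the query claims
    if not stolen.any():                     # query coincides with a site
        return float(values[tree.query(q)[1]])
    counts = np.bincount(owner[stolen], minlength=len(sites))
    return float(np.dot(counts / counts.sum(), values))
```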

Proceedings ArticleDOI
23 May 2005
TL;DR: This work proposes an alternative of using a 4-tap diagonal FIR filter for interpolation in luminance and a three-stage recursive algorithm to reduce the number of multiplications for interpolations in chrominance.
Abstract: The paper addresses a new computing architecture for motion compensation interpolation in the ITU-T H.264 video codec. In the H.264 standard, quarter-pixel interpolation is achieved by using a 6-tap horizontal or vertical FIR filter (for luminance) and a bilinear filter (for chrominance). However, the computing procedures are irregular, complicating their hardware implementation. We propose an alternative that uses a 4-tap diagonal FIR filter for interpolation in luminance and a three-stage recursive algorithm to reduce the number of multiplications for interpolation in chrominance. Experiments and analysis show that our proposed algorithms cause negligible degradation in image PSNR while being much more efficient in hardware implementation.
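
For context, the standard's luma half-pel filter that the authors seek to replace is the symmetric 6-tap (1, -5, 20, 20, -5, 1)/32. Here is a sketch of that reference filter in one dimension; the proposed 4-tap diagonal variant and the recursive chroma scheme are the paper's own and are not reproduced.

```python
import numpy as np

# Standard H.264 6-tap luma filter for half-pixel positions.
TAPS = np.array([1, -5, 20, 20, -5, 1], dtype=np.int32)

def half_pel_row(row):
    """Half-pixel samples along one row: (taps . pixels + 16) >> 5,
    clipped to the 8-bit range, as in the standard."""
    acc = np.convolve(row.astype(np.int32), TAPS, mode='valid')
    return np.clip((acc + 16) >> 5, 0, 255)
```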

Patent
28 Oct 2005
TL;DR: In this article, a set of pixels point-symmetrical to the interpolation position P 0 in each of the search areas SA + 1 and SA − 1 are defined as pixel pairs, and differential luminance values between the individual pixels in the pixel pairs are calculated for each pixel pair.
Abstract: More accurate frame-rate conversion is carried out in a simpler circuit configuration. Search areas SA +1 and SA −1 in each of which the pixel facing the interpolation position P 0 of a pixel in an interpolation frame is taken as a central pixel are set in the current frame and immediately previous frame of an image signal, a set of pixels point-symmetrical to the interpolation position P 0 in each of the search areas SA +1 and SA −1 are defined as pixel pairs, and differential luminance values between the individual pixels in the pixel pairs are calculated for each pixel pair. Of all these pixel pairs, only that having the minimum absolute differential value is selected as interpolation pixel pair, an interpolation frame is generated from the current frame and the immediately previous frame on the basis of the interpolation pixel vector of that interpolation pixel pair.
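
The selection rule is simple enough to state in code. A per-pixel sketch follows, with the search range and the fallback (co-located average) being my assumptions rather than values from the patent.

```python
import numpy as np

def interp_pixel(prev, cur, y, x, search=3):
    """Interpolated-frame pixel at (y, x): among pixel pairs placed
    point-symmetrically about (y, x) in the previous and current frames,
    take the pair with the smallest absolute luminance difference and
    output its average."""
    h, w = prev.shape
    best = None
    value = 0.5 * (float(prev[y, x]) + float(cur[y, x]))  # fallback
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0, y1, x1 = y - dy, x - dx, y + dy, x + dx
            if 0 <= y0 < h and 0 <= x0 < w and 0 <= y1 < h and 0 <= x1 < w:
                diff = abs(float(prev[y0, x0]) - float(cur[y1, x1]))
                if best is None or diff < best:
                    best = diff
                    value = 0.5 * (float(prev[y0, x0]) + float(cur[y1, x1]))
    return value
```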

Proceedings ArticleDOI
11 Nov 2005
TL;DR: This paper presents a fully automatic image mosaicing method for needs of wide-area video surveillance that is robust against illumination variations, moving objects, image rotation, image scaling, imaging noise, and is relatively fast to calculate.
Abstract: This paper presents a fully automatic image mosaicing method for needs of wide-area video surveillance. A pure feature-based approach was adopted for finding the registration between the images. This approach provides us with several advantages. Our method is robust against illumination variations, moving objects, image rotation, image scaling, imaging noise, and is relatively fast to calculate. We have tested the performance of the proposed method against several video sequences captured from real-world scenes. The results clearly justify our approach.
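
A feature-based registration pipeline of this kind is now a few OpenCV calls. The sketch below uses today's ORB descriptor rather than whatever detector the 2005 paper employed, so treat it only as an illustration of the approach: match local features, fit a homography with RANSAC, warp one frame onto the other.

```python
import cv2
import numpy as np

def register_pair(img_a, img_b):
    """Warp img_b into img_a's frame via feature matching + homography."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC makes the estimate robust to moving objects and mismatches.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    return cv2.warpPerspective(img_b, H, (w, h))
```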

Journal ArticleDOI
TL;DR: A new reconstruction algorithm based on a spline model for images is proposed; since this is an ill-posed inverse problem, regularization is used, and the resulting linear system of equations is solved iteratively.
Abstract: This paper presents a novel approach to the reconstruction of images from nonuniformly spaced samples. This problem is often encountered in digital image processing applications. Nonrecursive video coding with motion compensation, spatiotemporal interpolation of video sequences, and generation of new views in multicamera systems are three possible applications. We propose a new reconstruction algorithm based on a spline model for images. We use regularization, since this is an ill-posed inverse problem. We minimize a cost function composed of two terms: one related to the approximation error and the other related to the smoothness of the modeling function. All the processing is carried out in the space of spline coefficients; this space is discrete, although the problem itself is of a continuous nature. The coefficients of regularization and approximation filters are computed exactly by using the explicit expressions of B-spline functions in the time domain. The regularization is carried out locally, while the computation of the regularization factor accounts for the structure of the nonuniform sampling grid. The linear system of equations obtained is solved iteratively. Our results show a very good performance in motion-compensated interpolation applications.
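
A 1-D toy version of the cost being minimised, with linear interpolation standing in for the paper's cubic B-splines and a direct solve standing in for their iterative one: minimise ||A f - y||^2 + lambda ||D f||^2 over grid values f, where A samples the grid at the nonuniform positions and D is a second-difference roughness penalty.

```python
import numpy as np

def reconstruct_1d(x, y, n_grid, lam=0.1):
    """Regularised reconstruction of a uniform signal from nonuniform
    samples (x, y); x is assumed scaled to [0, n_grid - 1]."""
    A = np.zeros((len(x), n_grid))
    for r, xi in enumerate(x):
        i = min(int(np.floor(xi)), n_grid - 2)
        t = xi - i
        A[r, i], A[r, i + 1] = 1.0 - t, t     # linear sampling operator
    D = np.zeros((n_grid - 2, n_grid))
    for r in range(n_grid - 2):
        D[r, r:r + 3] = [1.0, -2.0, 1.0]      # second-difference operator
    lhs = A.T @ A + lam * (D.T @ D)
    return np.linalg.solve(lhs, A.T @ np.asarray(y, float))
```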

Journal ArticleDOI
TL;DR: The results indicate that the proposed regularized wavelet-based image super-resolution reconstruction approach has succeeded in obtaining a high-resolution image from multiple degraded observations with a high peak SNR.
Abstract: A regularized wavelet-based image super-resolution reconstruction approach is presented. The super-resolution image reconstruction problem is an ill-posed inverse problem. Several iterative solutions have been proposed, but they are time-consuming. The suggested approach avoids the computational complexity limitations of existing solutions. It is based on breaking the problem into four consecutive steps: a registration step, a multichannel regularized restoration step, a wavelet-based image fusion and denoising step, and finally a regularized image interpolation step. The objective of the wavelet fusion step is to integrate all of the data obtained from the multichannel restoration step into a single image. The wavelet denoising is performed for the low-SNR cases to reduce the noise effect. The obtained image is then interpolated using a regularized interpolation scheme. The paper explains the implementation of each of these steps. The results indicate that the proposed approach has succeeded in obtaining a high-resolution image from multiple degraded observations with a high peak SNR. The performance of the proposed approach is also investigated for degraded observations with different SNRs. The proposed approach can be implemented for large-dimension low-resolution images, which is not possible in most published iterative solutions. © 2005 Society of Photo-Optical Instrumentation Engineers.
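
The fusion step in isolation is brief with PyWavelets. Below is a single-level sketch using a common fusion rule (average the approximations, keep the larger-magnitude detail coefficients); the wavelet, decomposition level, and rule here are my assumptions, not necessarily the paper's.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet='db2'):
    """Fuse two restored observations in the wavelet domain."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    cA = 0.5 * (cA_a + cA_b)                  # average the smooth band
    details = (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))
    return pywt.idwt2((cA, details), wavelet)
```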

Book ChapterDOI
22 Aug 2005
TL;DR: This paper presents solutions to the key issues in ZM computation under polar coordinate system, including the derivation of computation formulas, the polar pixel arrangement scheme, and the interpolation-based image conversion etc.
Abstract: As an orthogonal moment, the Zernike moment (ZM) is an attractive image feature in a number of application scenarios due to its distinguishing properties. However, we find that for digital images, the commonly used Cartesian method for ZM computation has compromised the advantages of ZMs because of its non-ideal accuracy, stemming from two inherent sources of error, i.e., the geometric error and the integral error. There are considerable errors in image reconstruction using ZMs calculated with the Cartesian method. In this paper, we propose a polar coordinate based algorithm for the computation of ZMs, which avoids the two kinds of errors and greatly improves the accuracy of ZM computation. We present solutions to the key issues in ZM computation under a polar coordinate system, including the derivation of computation formulas, the polar pixel arrangement scheme, and the interpolation-based image conversion, etc. As a result, ZM-based image reconstruction can be performed much more accurately.
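
The polar formulation evaluates Z_nm directly with the polar area element rho drho dtheta, sidestepping the square-pixel geometric error. Below is a sketch of the standard Zernike radial polynomial and a plain rectangle-rule quadrature over a polar grid; the paper's pixel-arrangement scheme is more careful than this.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho) of the Zernike basis."""
    m = abs(m)
    rho = np.asarray(rho, dtype=float)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k) /
             (factorial(k) * factorial((n + m) // 2 - k)
                           * factorial((n - m) // 2 - k)))
        out += c * rho ** (n - 2 * k)
    return out

def zernike_moment_polar(f_polar, n, m, rhos, thetas):
    """Z_nm from samples f_polar[i, j] = f(rhos[i], thetas[j]) on the
    unit disk, using the polar area element rho * d_rho * d_theta."""
    d_rho, d_theta = rhos[1] - rhos[0], thetas[1] - thetas[0]
    R = zernike_radial(n, m, rhos)[:, None]
    E = np.exp(-1j * m * thetas)[None, :]
    integrand = f_polar * R * E * rhos[:, None]
    return (n + 1) / np.pi * integrand.sum() * d_rho * d_theta
```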

Proceedings ArticleDOI
14 Nov 2005
TL;DR: A method for interpolating images that also preserves sharp edge information by mapping level curves of the image and results show an improvement in visual quality: edges are sharper and ringing effects are removed.
Abstract: In this paper we present a method for interpolating images that also preserves sharp edge information. We concentrate on tackling blurred edges by mapping level curves of the image. Level curves or isophotes are spatial curves with constant intensity. The mapping of these intensities can be seen as a local contrast enhancement problem, therefore we can use contrast enhancement techniques coupled with additional constraints for the interpolation problem. A great advantage of this approach is that the shape of the level set contours is preserved and no explicit edge detection is needed here. Results show an improvement in visual quality: edges are sharper and ringing effects are removed.

Patent
07 Nov 2005
TL;DR: An image scaling system for converting a sampling rate of an input video signal comprising input pixel data to produce a magnified or reduced output video image comprising output pixel data includes a first one-dimensional image scaler comprising a single-stage finite-duration impulse response (FIR) filter structure with poly-phase filter responses.
Abstract: An image scaling system for converting the sampling rate of an input video signal comprising input pixel data to produce a magnified or reduced output video image comprising output pixel data includes a first one-dimensional image scaler comprising a single-stage finite-duration impulse response (FIR) filter structure with poly-phase filter responses that receives and scales the input pixel data by a scaling factor in either the horizontal or vertical direction to produce output pixel data, and a second one-dimensional image scaler comprising a single-stage FIR filter structure that is coupled in tandem to the first one-dimensional image scaler and scales by the scaling factor the output pixel data from the first one-dimensional image scaler in the direction perpendicular to that of the first one-dimensional image scaler to produce the magnified or reduced output video image. Each of the first and second one-dimensional image scalers includes a reconfigurable interpolation unit that allows each single-stage FIR filter structure to switch back and forth between operating in a direct-form FIR filter mode and a transposed-form FIR filter mode while using a fixed set of system resources, such that a wide range of image magnification and reduction scaling factors can be achieved using the fixed set of system resources.
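
A poly-phase structure amounts to a bank of short FIR filters indexed by the output sample's fractional phase. Here is a software sketch with a deliberately trivial (linear-interpolation) prototype filter; a real scaler would load windowed-sinc taps, and the 2-D case runs this twice, once per axis, exactly as the tandem scalers in the patent do.

```python
import numpy as np

def polyphase_table(n_phases=32, n_taps=4):
    """One n_taps FIR filter per fractional phase (linear prototype)."""
    table = np.zeros((n_phases, n_taps))
    for p in range(n_phases):
        frac = p / n_phases
        table[p, 1], table[p, 2] = 1.0 - frac, frac
    return table

def resample_line(line, scale, table):
    """Scale a 1-D line: step a source position, pick the filter for its
    fractional phase, apply it to the neighbouring taps."""
    n_phases, n_taps = table.shape
    out = np.empty(int(len(line) * scale))
    for i in range(len(out)):
        pos = i / scale
        base = int(np.floor(pos)) - 1                 # left-most tap
        phase = int((pos - np.floor(pos)) * n_phases)
        taps = [line[min(max(base + t, 0), len(line) - 1)]
                for t in range(n_taps)]
        out[i] = float(np.dot(table[phase], taps))
    return out
```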

Proceedings ArticleDOI
14 Nov 2005
TL;DR: The Lanczos algorithm achieves the best performance among several interpolation techniques for ultrasound breast phantom data; without sacrificing the quality of the ultrasound images, the proposed down-sampling strategies increase the speed of system processing.
Abstract: In computer-aided analysis of mammograms, fast processing benefits real-time applications and helps radiologists perform on-line diagnosis. Down-sampling is widely applied to reduce the size of large images and improve processing speed. This paper presents performance evaluations of several interpolation techniques (bilinear, bicubic, wavelet, and Lanczos) for ultrasound breast phantom data. We also compared lesion segmentation results of down-sampled images with the results for the original images. Two major metrics, the Hausdorff distance measure (HDM) and the polyline distance measure (PDM), were applied to measure the performance of the segmentation. We conclude that, without sacrificing the quality of the ultrasound images, our down-sampling strategies increase the speed of system processing. Moreover, among the four techniques, the Lanczos algorithm achieves the best performance.
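
For reference, the Lanczos kernel the comparison favours is just a windowed sinc; in a resampler, taps are taken at the integer offsets around the sample position and renormalised so the weights sum to one.

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x / a) for |x| < a, else 0.
    (np.sinc is the normalised sinc, sin(pi x) / (pi x).)"""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)
```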


Journal ArticleDOI
TL;DR: In this paper, five different particle image velocimetry (PIV) interrogation algorithms are tested with numerically generated particle images and two real data sets measured in turbulent flows with relatively small particle images of size 1.0–2.5 pixels.
Abstract: Five different particle image velocimetry (PIV) interrogation algorithms are tested with numerically generated particle images and two real data sets measured in turbulent flows with relatively small particle images of size 1.0–2.5 pixels. The size distribution of the particle images is analyzed for both the synthetic and the real data in order to evaluate the tendency for peak-locking occurrence. First, the accuracy of the algorithms in terms of mean bias and rms error is compared using simulated data. Then, the algorithms’ ability to handle the peak-locking effect in an accelerating flow through a 2:1 contraction is compared, and their ability to estimate the rms and Reynolds shear stress profiles in a near-wall region of a turbulent boundary layer (TBL) at Reτ = 510 is analyzed. The results of the latter case are compared to direct numerical simulation (DNS) data of a TBL. The algorithms are: standard fast Fourier transform cross-correlation (FFT-CC), direct normalized cross-correlation (DNCC), iterative FFT-CC with discrete window shift (DWS), iterative FFT-CC with continuous window shift (CWS), and iterative FFT-CC CWS with image deformation (CWD). Gaussian three-point peak fitting for sub-pixel estimation is used in all the algorithms. According to the tests with the non-deformation algorithms, DNCC seems to give the best rms estimation by the wall, and the CWS methods give slightly smaller peak-locking observations than the other methods. With the CWS methods, a bias error compensation method for the bilinear image interpolation, based on the particle image size analysis, is developed and tested, giving the same performance as the image interpolation based on the cardinal function. With the CWD algorithms, the effect of the spatial filter size between the iteration loops is analyzed, and it is found to have a strong effect on the results. In the near-wall region, the turbulence intensity varies by up to 4%, depending on the chosen interrogation algorithm. In addition, the algorithms' computational performance is tested.
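
The "Gaussian three-point peak fitting" shared by all five algorithms has a closed form: fit a parabola to the logarithms of the three correlation samples straddling the integer peak. A sketch follows; the correlation samples must be positive for the logs to exist.

```python
import numpy as np

def gaussian_subpixel_peak(R, i, j):
    """Sub-pixel location of the correlation peak around integer (i, j)."""
    def offset(c_minus, c0, c_plus):
        lm, l0, lp = np.log(c_minus), np.log(c0), np.log(c_plus)
        return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))
    dy = offset(R[i - 1, j], R[i, j], R[i + 1, j])
    dx = offset(R[i, j - 1], R[i, j], R[i, j + 1])
    return i + dy, j + dx
```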

Journal ArticleDOI
TL;DR: Kim et al. (2003) proposed a new image scaling method called winscale, which can be used for scaling up and down; however, scaling down utilizing the winscale concept gives exactly the same results as the well-known bilinear interpolation.
Abstract: In the paper by Kim et al. (2003), the authors propose a new image scaling method called winscale. The presented method can be used for scaling up and down. However, scaling down utilizing the winscale concept gives exactly the same results as the well-known bilinear interpolation. Furthermore, compared to bilinear, scaling up with the proposed winscale "overlap stamping" method has very similar calculations. The basic winscale upscaling differs from the bilinear method.

Journal ArticleDOI
TL;DR: Results show that the suggested blind image reconstruction approach succeeds in estimating a high-resolution image from noisy blurred observations in the case of relatively coprime unknown blurring operators.
Abstract: We developed an approach to the blind multichannel reconstruction of high-resolution images. This approach is based on breaking the image reconstruction problem into three consecutive steps: a blind multichannel restoration, a wavelet-based image fusion, and a maximum entropy image interpolation. The blind restoration step depends on estimating the two-dimensional (2-D) greatest common divisor (GCD) between each observation and a combinational image generated by a weighted averaging process of the available observations. The purpose of generating this combinational image is to get a new image with a higher signal-to-noise ratio and a blurring operator that is coprime with all the blurring operators of the available observations. The 2-D GCD is then estimated between the new image and each observation, and thus the effect of noise on the estimation process is reduced. The multiple outputs of the restoration step are then applied to the image fusion step, which is based on wavelets. The objective of this step is to integrate the data obtained from each observation into a single image, which is then interpolated to give an enhanced resolution image. A maximum entropy algorithm is derived and used in interpolating the resulting image from the fusion step. Results show that the suggested blind image reconstruction approach succeeds in estimating a high-resolution image from noisy blurred observations in the case of relatively coprime unknown blurring operators. The required computation time of the suggested approach is moderate.

Journal ArticleDOI
TL;DR: This paper presents how two-dimensional (2-D) image scaling can be accelerated with a new coarse-grained parallel processing method and the most promising architecture is implemented as a simulation model and the hardware resources as well as the performance are evaluated.
Abstract: Image scaling is a frequent operation in medical image processing. This paper presents how two-dimensional (2-D) image scaling can be accelerated with a new coarse-grained parallel processing method. The method is based on evenly divisible image sizes which is, in practice, the case with most medical images. In the proposed method, the image is divided into slices and all the slices are scaled in parallel. The complexity of the method is examined with two parallel architectures while considering memory consumption and data throughput. Several scaling functions can be handled with these generic architectures including linear, cubic B-spline, cubic, Lagrange, Gaussian, and sinc interpolations. Parallelism can be adjusted independent of the complexity of the computational units. The most promising architecture is implemented as a simulation model and the hardware resources as well as the performance are evaluated. All the significant resources are shown to be linearly proportional to the parallelization factor. With contemporary programmable logic, real-time scaling is achievable with large resolution 2-D images and a good quality interpolation. The proposed block-level scaling is also shown to increase software scaling performance over four times.
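
The slice-parallel idea maps directly onto thread pools in software as well. A sketch assuming, as the paper does, that the image height divides evenly into slices; a production version would overlap the slices by the interpolator's support to avoid seams at slice boundaries.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.ndimage import zoom

def parallel_scale(img, factor, n_slices=4):
    """Scale an image by zooming horizontal slices concurrently."""
    slices = np.split(img, n_slices, axis=0)   # requires even division
    with ThreadPoolExecutor(max_workers=n_slices) as pool:
        scaled = list(pool.map(lambda s: zoom(s, factor, order=1), slices))
    return np.vstack(scaled)
```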