Proceedings ArticleDOI

Image Analogy Based Document Image Compression

TL;DR: An image analogy based super-resolution technique that can be used as an effective tool for document image compression and multi-resolution viewing of the document.
Abstract: In this work, we propose an image-analogy-based super-resolution technique that can serve as an effective tool for document image compression and multi-resolution viewing of the document. The technique uses the Dugad and Ahuja method for resizing document images. Next, the image analogies framework is applied to add the missing high-frequency information. The encoder allows the user to compress a spatially lower-resolution version of the image with any standard image compression technique, thus enabling substantial compression. At the decoder end, the image is resized using the Dugad and Ahuja method and then enhanced through image analogies, which append the missing high-frequency details using a training pair from the same class of document image.
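A minimal sketch of that encode/decode pipeline follows. It is illustrative only: area/bicubic resampling and JPEG stand in for the Dugad–Ahuja DCT-domain resizer and the "standard image compression technique", and `analogy_enhance` is a hypothetical callable representing the image-analogy enhancement step.

```python
# Sketch of the encode/decode pipeline described above (illustrative, not the
# authors' implementation): down-sample, code with a standard codec, then
# up-sample and restore high frequencies with an image-analogy step.
import cv2  # OpenCV for the stand-in codec and resampler

def encode(document_img, quality=75):
    """Down-sample by 2 (area resampling stands in for the Dugad-Ahuja
    DCT-domain method), then compress with a standard codec (JPEG here)."""
    low_res = cv2.resize(document_img, None, fx=0.5, fy=0.5,
                         interpolation=cv2.INTER_AREA)
    ok, bitstream = cv2.imencode(".jpg", low_res,
                                 [cv2.IMWRITE_JPEG_QUALITY, quality])
    return bitstream

def decode(bitstream, training_pair, analogy_enhance):
    """Decompress, up-sample by 2, then append the missing high-frequency
    detail via an image-analogy step trained on a (low-res, high-res) pair
    of the same document class. `analogy_enhance` is a hypothetical callable."""
    low_res = cv2.imdecode(bitstream, cv2.IMREAD_GRAYSCALE)
    upsampled = cv2.resize(low_res, None, fx=2.0, fy=2.0,
                           interpolation=cv2.INTER_CUBIC)
    return analogy_enhance(upsampled, training_pair)
```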
Citations
Proceedings Article
Apurba Das, R Remya
01 Nov 2012
TL;DR: A novel Orientation- and Scale-mapped RDC (OS-RDC) algorithm is proposed that can identify and cache repeated image blocks even when they differ in size and orientation.
Abstract: Repeated appearances of any block of spatial data in document images can be cached and encoded a single time to obtain a good compression ratio. This Reusable Document Component (RDC) can replicate the blocks of each redundant image at the receiver side at different positions, but only with the same size and orientation. We propose a novel Orientation- and Scale-mapped RDC (OS-RDC) algorithm that can identify and cache repeated image blocks even when they differ in size and orientation. Both inter-page and intra-page redundancies are addressed while ensuring significant quality preservation.
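As a rough illustration of the idea (not the authors' OS-RDC algorithm), a cached block can be tested against a candidate block under right-angle rotations and a rescaling; the matching criterion used here, normalized cross-correlation via OpenCV, is an assumption.

```python
# Illustrative block-matching test for scale/orientation-mapped caching
# (not the authors' OS-RDC algorithm; the matching criterion is an assumption).
import cv2
import numpy as np

def matches_cached(block, cached, thresh=0.95):
    """Return k (number of 90-degree turns) if `block` appears to be a rotated
    and rescaled copy of `cached`, else None."""
    bh, bw = block.shape
    for k in range(4):                                   # 0, 90, 180, 270 degrees
        rotated = np.ascontiguousarray(np.rot90(cached, k))
        candidate = cv2.resize(rotated, (bw, bh))        # absorb the scale difference
        score = cv2.matchTemplate(block.astype(np.float32),
                                  candidate.astype(np.float32),
                                  cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > thresh:
            return k     # hit: transmit only a cache reference plus (scale, rotation)
    return None          # miss: cache `block` and encode it once
```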

3 citations


Cites background from "Image Analogy Based Document Image ..."

  • ...In communication [11], printing [12] and digital library based applications [13], betterment in the process of document image encoding and compression for efficient utilization of bandwidth, time and storage respectively is always a state-of-the-art research area [1], [3], [4], [6], [15]....


Journal ArticleDOI
TL;DR: An efficient image compression scheme combining the Burrows–Wheeler transform (BWT) with set partitioning in hierarchical trees (SPIHT) is proposed, in which SPIHT lossily codes the non-ROI area while BWT losslessly codes the ROI area.
Abstract: In this article, an efficient image compression scheme is presented that combines the Burrows–Wheeler transform (BWT) with set partitioning in hierarchical trees (SPIHT). The main phases of the proposed system are: partitioning, compression of non-ROI areas, fusion, and compression of ROI areas. To strengthen the proposed methodology, the morphological functions are updated by dividing the two types of images with consideration of dilation and erosion control. The convolution and correlation used in the deformation then provide good segmentation accuracy at high speed. In this methodology, SPIHT is a lossy compression technique applied to the non-ROI area, while BWT is a lossless compression technique applied to the ROI area. Finally, the two parts of the image are merged and the image is reconstructed to the desired quality. A variety of test images are used, and the performance of the proposed system is analyzed using compression ratio and PSNR measurements.
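A rough sketch of such ROI/non-ROI split coding is shown below; bz2 (a BWT-based coder) stands in for the article's BWT stage and JPEG for SPIHT, and the mask handling is deliberately simplified.

```python
# Rough sketch of ROI / non-ROI split coding: bz2 (a BWT-based coder) stands in
# for the article's BWT stage, JPEG for SPIHT; the mask handling is simplified.
import bz2
import cv2
import numpy as np

def compress(img, roi_mask, jpeg_quality=50):
    roi = np.where(roi_mask, img, 0).astype(np.uint8)        # keep only ROI pixels
    non_roi = np.where(roi_mask, 0, img).astype(np.uint8)    # keep only the background
    roi_stream = bz2.compress(roi.tobytes())                 # lossless (BWT-based)
    ok, bg_stream = cv2.imencode(".jpg", non_roi,
                                 [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])  # lossy
    return roi_stream, bg_stream, img.shape

def decompress(roi_stream, bg_stream, shape, roi_mask):
    roi = np.frombuffer(bz2.decompress(roi_stream), dtype=np.uint8).reshape(shape)
    non_roi = cv2.imdecode(bg_stream, cv2.IMREAD_GRAYSCALE)
    return np.where(roi_mask, roi, non_roi)                  # merge the two parts
```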

2 citations

References
Journal ArticleDOI
TL;DR: An algorithm for downsampling and upsampling in the compressed domain that is computationally much faster, produces visually sharper images, and gives significant improvements in PSNR (typically 4 dB better than bilinear interpolation).
Abstract: Given a video frame in terms of its 8×8 block-DCT coefficients, we wish to obtain a downsized or upsized version of this frame, also in terms of 8×8 block-DCT coefficients. The DCT, being a linear unitary transform, is distributive over matrix multiplication. This fact has been used for downsampling video frames in the DCT domain. However, this involves matrix multiplication with the DCT of the downsampling matrix. This multiplication can be costly enough to trade off any gains obtained by operating directly in the compressed domain. We propose an algorithm for downsampling and also upsampling in the compressed domain which is computationally much faster, produces visually sharper images, and gives significant improvements in PSNR (typically 4 dB better than bilinear interpolation). Specifically, the downsampling method requires 1.25 multiplications and 1.25 additions per pixel of the original image, compared to 4.00 multiplications and 4.75 additions required by the method of Chang et al. (1995). Moreover, the downsampling and upsampling schemes combined together preserve all the low-frequency DCT coefficients of the original image. This implies tremendous savings for coding the difference between the original frame (unsampled image) and its prediction (the upsampled image). This is desirable for many applications based on scalable encoding of video. The method presented can also be used with transforms other than DCT, such as Hadamard or Fourier.
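The core of this resizing scheme is low-frequency truncation (for downsampling) and zero-padding (for upsampling) of DCT blocks. The sketch below illustrates that idea on spatial 8×8 / 4×4 blocks using SciPy's orthonormal DCT; the actual method operates directly on the 8×8 block-DCT coefficients of the compressed stream, and the factor-of-2 rescaling shown here follows from the change in DCT normalization.

```python
# Low-frequency truncation / zero-padding in the DCT domain (the core of
# DCT-domain resizing by a factor of 2); the factor-of-2 rescaling follows
# from the orthonormal DCT normalization for block sizes 8 and 4.
import numpy as np
from scipy.fft import dctn, idctn

def downsample_block(block8):
    """8x8 spatial block -> 4x4 spatial block via its low-frequency DCT part."""
    coeffs = dctn(block8, norm='ortho')           # 8x8 DCT
    low = coeffs[:4, :4] / 2.0                    # keep low frequencies, rescale
    return idctn(low, norm='ortho')               # 4x4 spatial block

def upsample_block(block4):
    """4x4 spatial block -> 8x8 spatial block by zero-padding its DCT."""
    coeffs = np.zeros((8, 8))
    coeffs[:4, :4] = dctn(block4, norm='ortho') * 2.0
    return idctn(coeffs, norm='ortho')
```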

286 citations


"Image Analogy Based Document Image ..." refers background or methods in this paper

  • ...The rest of the paper is organized as follows: Section II explains the elegant Dugad and Ahuja algorithm [2] which we use to generate inputs to image analogies framework....


  • ...The up-sampling scheme proposed by Dugad and Ahuja generates results which are somewhat blurred....


  • ...However, it is the simplicity of the algorithm presented that makes it so interesting and practical....


  • ...In our experiments, we [...] (excerpt interrupted by the Figure 1 caption: "*Implemented by Dugad and Ahuja method [2]")....


  • ...1(a), is first compressed by down-sampling by a factor of 2 using the Dugad and Ahuja algorithm [2]....


Journal ArticleDOI
TL;DR: An approach to obtaining high-resolution image reconstruction from low-resolution, blurred, and noisy multiple-input frames is presented and a recursive-least-squares approach with iterative regularization is developed in the discrete Fourier transform (DFT) domain.
Abstract: An approach to obtaining high-resolution image reconstruction from low-resolution, blurred, and noisy multiple-input frames is presented. A recursive-least-squares approach with iterative regularization is developed in the discrete Fourier transform (DFT) domain. When the input frames are processed recursively, the reconstruction does not converge in general due to the measurement noise and ill-conditioned nature of the deblurring. Through the iterative update of the regularization function and the proper choice of the regularization parameter, good high-resolution reconstructions of low-resolution, blurred, and noisy input frames are obtained. The proposed algorithm minimizes the computational requirements and provides a parallel computation structure since the reconstruction is done independently for each DFT element. Computer simulations demonstrate the performance of the algorithm.
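Because the reconstruction is carried out independently for each DFT element, the essence can be illustrated with a non-recursive, Tikhonov-regularized least-squares fusion per frequency bin; the paper's recursive update, iterative regularization, and noise/aliasing handling are omitted, and the blur DFTs are assumed to be given.

```python
# Non-recursive sketch of per-DFT-bin regularized least-squares fusion of
# several blurred, noisy frames (the paper's recursive update, iterative
# regularization, and aliasing handling are omitted).
import numpy as np

def fuse_frames(frames, blur_ffts, alpha=1e-2):
    """frames: list of 2-D observations; blur_ffts: matching 2-D DFTs of the
    blur kernels; alpha: Tikhonov regularization parameter."""
    num = np.zeros_like(blur_ffts[0], dtype=complex)
    den = np.full_like(num, alpha)
    for y, h in zip(frames, blur_ffts):
        Y = np.fft.fft2(y)
        num += np.conj(h) * Y          # accumulate cross terms, bin by bin
        den += np.abs(h) ** 2          # accumulate blur energy, bin by bin
    return np.real(np.fft.ifft2(num / den))
```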

270 citations


Additional excerpts

  • ...This idea was first introduced by Tsai and Huang [4] for multi-frame image restoration of bandlimited signals....


Proceedings ArticleDOI
01 Jan 2000
TL;DR: A new and efficient wavelet-based algorithm for image superresolution that exploits the interlaced sampling structure in the low resolution data is presented.
Abstract: Superresolution produces high-quality, high-resolution images from a set of degraded, low-resolution frames. We present a new and efficient wavelet-based algorithm for image superresolution. The algorithm is a combination of interpolation and restoration processes. Unlike previous work, our method exploits the interlaced sampling structure in the low-resolution data. Numerical experiments and analysis demonstrate the effectiveness of our approach and illustrate why the computational complexity only doubles for 2-D superresolution versus the 1-D case.
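An illustrative interleave-then-restore pipeline in this spirit is sketched below; plain wavelet soft-thresholding (via PyWavelets) stands in for the paper's restoration step, and it assumes the integer frame offsets fully cover the high-resolution lattice.

```python
# Illustrative interleave-then-restore pipeline: low-resolution frames sampled
# on interlaced grids are merged onto the high-resolution lattice, then a
# wavelet soft-thresholding step stands in for the restoration stage.
import numpy as np
import pywt

def superresolve(frames, offsets, factor=2, wavelet='db2', thresh=5.0):
    """frames: low-res images on interlaced grids given by integer (dy, dx)
    `offsets` on the high-res lattice (assumed to cover it completely)."""
    h, w = frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    for frame, (dy, dx) in zip(frames, offsets):
        hr[dy::factor, dx::factor] = frame              # interlace samples onto HR grid
    coeffs = pywt.wavedec2(hr, wavelet, level=2)        # restoration: shrink detail bands
    coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thresh, mode='soft') for d in band)
                            for band in coeffs[1:]]
    return pywt.waverec2(coeffs, wavelet)
```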

92 citations


Additional excerpts

  • ...This idea was first introduced by Tsai and Huang [4] for multi-frame image restoration of bandlimited signals....


Proceedings ArticleDOI
Jinyu Chu, Ju Liu, Jianping Qiao, Xiaoling Wang, Yujun Li
08 Dec 2008
TL;DR: In this paper, a super-resolution method based on gradient-based adaptive interpolation is proposed, in which the interpolation coefficients account for both the distance between the interpolated pixel and the neighboring valid pixels and the local gradient of the original image.
Abstract: This paper presents a super-resolution method based on gradient-based adaptive interpolation. In this method, in addition to the distance between the interpolated pixel and the neighboring valid pixels, the interpolation coefficients take the local gradient of the original image into account: the smaller the local gradient of a pixel is, the more influence it has on the interpolated pixel. The interpolated high-resolution image is finally deblurred by applying a Wiener filter. Experimental results show that our proposed method not only substantially improves the subjective and objective quality of restored images, especially by enhancing edges, but is also robust to registration error and has low computational complexity.
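A simplified reading of that interpolation rule is sketched below: each high-resolution pixel is a weighted average of its four surrounding low-resolution samples, with weights that fall with both distance and local gradient magnitude, followed by Wiener deconvolution. The Sobel gradient, the 3×3 blur PSF, and the balance parameter are assumptions made for illustration.

```python
# Simplified gradient-weighted interpolation followed by Wiener deconvolution.
# The Sobel gradient, the 3x3 blur PSF and the `balance` value are assumptions.
import numpy as np
from scipy.ndimage import sobel
from skimage.restoration import wiener

def upscale_adaptive(lr, factor=2, eps=1e-3):
    grad = np.hypot(sobel(lr, axis=0), sobel(lr, axis=1))   # local gradient magnitude
    h, w = lr.shape
    ys, xs = np.mgrid[0:h * factor, 0:w * factor]
    num = np.zeros(ys.shape)
    den = np.zeros(ys.shape)
    for dy in (0, 1):                      # the four surrounding low-res samples
        for dx in (0, 1):
            ny = np.clip(ys // factor + dy, 0, h - 1)
            nx = np.clip(xs // factor + dx, 0, w - 1)
            dist = np.hypot(ys / factor - ny, xs / factor - nx) + eps
            wgt = 1.0 / (dist * (grad[ny, nx] + eps))       # low gradient -> more weight
            num += wgt * lr[ny, nx]
            den += wgt
    hr = num / den
    psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])    # assumed small blur kernel
    return wiener(hr / hr.max(), psf, balance=0.1)          # Wiener deblurring
```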

23 citations

Posted Content
Jinyu Chu, Ju Liu, Jianping Qiao, Xiaoling Wang, Yujun Li
TL;DR: Experimental results show that the proposed super-resolution method not only substantially improves the subjective and objective quality of restored images, especially enhances edges, but also is robust to the registration error and has low computational complexity.
Abstract: This paper presents a super-resolution method based on gradient-based adaptive interpolation. In this method, in addition to considering the distance between the interpolated pixel and the neighboring valid pixel, the interpolation coefficients take the local gradient of the original image into account. The smaller the local gradient of a pixel is, the more influence it should have on the interpolated pixel. The interpolated high-resolution image is finally deblurred by applying a Wiener filter. Experimental results show that our proposed method not only substantially improves the subjective and objective quality of restored images, especially enhances edges, but also is robust to the registration error and has low computational complexity.

20 citations


Additional excerpts

  • ...This idea was first introduced by Tsai and Huang [4] for multi-frame image restoration of bandlimited signals....
