Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2415 publications have been published within this topic receiving 51110 citations. The topic is also known as: Lossless JPEG & .jls.


Papers
Journal ArticleDOI
TL;DR: A new text transformation technique, Dictionary Based Text Filter for Lossless Text Compression, replaces words in the source file with shorter codewords whenever they are present in an external static dictionary.
Abstract: This paper presents a new text transformation technique called Dictionary Based Text Filter for Lossless Text Compression. A text transformation technique should preserve the data during the encoding and decoding process. In the proposed approach, words in the source file are replaced with shorter codewords whenever they are present in an external static dictionary. The immediate advantage of text transformation is that codewords are shorter than the actual words, so the same amount of text requires less space. On average, 16% of the characters in text files are spaces, so removing the spaces between words from the source files further improves the compression rate. The unused ASCII characters from 128 to 255 are used to generate the codewords, and the chosen codeword scheme allows the spaces between words to be removed from the encoded file. The proposed algorithm has been implemented and tested on standard corpora and reduces files by up to 85% of their source size. We recommend this technique for compressing large text files in library digitization.
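The core idea of the abstract above, replacing dictionary words with single bytes from the unused ASCII range 128-255 so that the following space can be dropped, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dictionary entries are hypothetical, and the real method uses a large external static dictionary with multi-byte codewords.

```python
# Hypothetical static dictionary; the paper uses a large external one.
DICTIONARY = ["the", "and", "compression", "text"]

# Assign each dictionary word a single unused byte (128-255).
ENCODE = {w: bytes([128 + i]) for i, w in enumerate(DICTIONARY)}
DECODE = {v[0]: w for w, v in ENCODE.items()}

def encode(text: str) -> bytes:
    out = bytearray()
    for word in text.split(" "):
        if word in ENCODE:
            out += ENCODE[word]               # codeword implies the following space
        else:
            out += word.encode("ascii") + b" "  # literal word keeps its space
    return bytes(out)

def decode(data: bytes) -> str:
    words, current = [], bytearray()
    for b in data:
        if b in DECODE:                       # codeword byte: flush literal, emit word
            if current:
                words.append(current.decode("ascii"))
                current = bytearray()
            words.append(DECODE[b])
        elif b == 0x20:                       # literal space ends a literal word
            words.append(current.decode("ascii"))
            current = bytearray()
        else:
            current.append(b)
    if current:
        words.append(current.decode("ascii"))
    return " ".join(words)
```

A sentence made entirely of dictionary words encodes to one byte per word, which is where the space savings come from; out-of-dictionary words pass through unchanged.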

2 citations

Proceedings ArticleDOI
01 Dec 2009
TL;DR: A data hiding scheme for image error concealment in the JPEG 2000 coding pipeline using integer wavelets, quantization index modulation (QIM), and region-of-interest (ROI) coding; simulations show efficient performance with various lost blocks in the ROI.
Abstract: The transmission of multimedia signals, such as images, over noisy wireless channels may cause errors that severely degrade the visual message. Error concealment (EC) techniques reduce visual artifacts by exploiting inherent redundancy through post-processing at the decoder. In this paper, we present a data hiding scheme for image error concealment in the JPEG 2000 coding pipeline using integer wavelets, quantization index modulation (QIM), and region-of-interest (ROI) coding functionality. To restore the portion of the image of interest to a human viewer, the information (image digest) of the ROI is embedded into the region of background (ROB) using data hiding. Threshold-based image segmentation together with morphological operations is used to find the ROI, while a halftoning technique is used to obtain the image digest from the ROI. Simulation results show that the scheme performs efficiently with various lost blocks in the ROI. Keywords—Data Hiding, Error Concealment, ROI, QIM, Integer Wavelets
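QIM, the embedding primitive named in the abstract, hides one bit per coefficient by quantizing it onto one of two interleaved lattices. A minimal sketch, where the step size DELTA and the choice of coefficients are assumptions and the paper's integer-wavelet and ROI machinery is omitted:

```python
import numpy as np

DELTA = 8.0  # quantization step; an assumption for this sketch

def qim_embed(coeffs, bits):
    """Embed one bit per coefficient: bit 0 snaps to multiples of DELTA,
    bit 1 to the lattice offset by DELTA/2."""
    coeffs = np.asarray(coeffs, dtype=float)
    offset = np.asarray(bits) * (DELTA / 2.0)
    return np.round((coeffs - offset) / DELTA) * DELTA + offset

def qim_extract(coeffs):
    """Recover each bit by checking which lattice the coefficient is on."""
    coeffs = np.asarray(coeffs, dtype=float)
    d0 = np.abs(coeffs - np.round(coeffs / DELTA) * DELTA)
    d1 = np.abs(coeffs - (np.round((coeffs - DELTA / 2) / DELTA) * DELTA + DELTA / 2))
    return (d1 < d0).astype(int)
```

Extraction needs no side information beyond DELTA, and the embedding distortion is bounded by DELTA/2 per coefficient, which is why QIM suits blind data hiding in transform coefficients.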

2 citations

Proceedings ArticleDOI
21 Apr 1995
TL;DR: Generally, psychovisually weighted quantization plays a dominant role in overall system performance, and it would be effective to focus on the quantization and entropy coding procedures.
Abstract: In this paper, we have investigated the compression behavior of each processing step of the JPEG baseline image coder. The two main objectives of this research are to provide a better understanding of the JPEG system and to provide a guideline for improving the performance of any JPEG-like image coder. For performance evaluation, we have chosen estimated entropy as the performance measure. The key results of this paper are: (1) Generally, psychovisually weighted quantization plays a dominant role in overall system performance. (2) The compression gain provided by the entropy coding procedure is also significant. Since there is a gap between the estimated entropies and the actual coding rates, a more efficient entropy coding procedure that reflects the signal statistics should improve system performance. (3) The common concept of the optimal transform is variance-based, which requires a zonal selection of the transform coefficients. Since JPEG adopts thresholding quantization, the ordinary discussion of an optimal transform is not appropriate; a truly optimal transform should take the transform and its subsequent operations into account. In consequence, to improve the overall system performance it would be effective to focus on the quantization and entropy coding procedures. © (1995) SPIE--The International Society for Optical Engineering.
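The performance measure used above, estimated entropy of each stage's output, is the first-order empirical entropy of the symbol stream. A minimal version of that measure (the paper's exact estimator and symbol alphabets are not specified here, so this is an illustrative sketch):

```python
import numpy as np

def estimated_entropy(symbols):
    """First-order empirical entropy in bits per symbol, e.g. over
    quantized DCT coefficients at some stage of the JPEG pipeline."""
    _, counts = np.unique(np.asarray(symbols).ravel(), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

Comparing this estimate before and after a stage (transform, quantization, entropy coding) quantifies how much compression gain that stage contributes, which is the methodology the abstract describes.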

2 citations

Book ChapterDOI
15 Sep 2008
TL;DR: The proposed GPM approach has advantages in terms of encoding speed, parallelism, scalability, simplicity, and easy hardware implementation over other sequential lossless compression methods.
Abstract: This paper presents a new generalized particle model (GPM) to generate the prediction coding for lossless data compression. Local rules for particle movement in the GPM, a parallel algorithm, and its implementation structure to generate the desired predictive coding are discussed. The proposed GPM approach has advantages in terms of encoding speed, parallelism, scalability, simplicity, and easy hardware implementation over other sequential lossless compression methods.

2 citations

Journal ArticleDOI
01 Jan 2015
TL;DR: The obtained results indicate that the proposed method's performance and compression ratio are comparable with those of analogues, while offering simultaneous binary-image processing and requiring no computational operations at the prediction stage.
Abstract: The suggested digital image compression method is characterized by simplicity of implementation and the absence of computing operations at the prediction stage. Background: Miniature space observation facilities (small satellites) cannot provide continuous data transmission because of severely constrained energy budgets, which creates the need for new energy-efficient, low-cost digital image compression methods that match or surpass the known high-resolution, multi-digit image compression methods. Methods: The algorithm consists of the following procedures: splitting the digital image into binary images (bit planes), predicting each element of the binary images based on the theory of conditional Markov processes with discrete states, and coding with any known algorithm (here, the Huffman method is used). Results: To prove the efficiency of the proposed method, Earth-surface space pictures (group A) and photos (group B) were compressed. Each group contained 50 same-type images. Known lossless compression algorithms such as PNG, JPEG-LS, JPEG 2000, BMF, Qlic, and ImageZero are used as analogues. The obtained results indicate that the proposed method's speed and compression ratio are comparable with those of the analogues. Concluding Remarks: The suggested method has the following advantages: the capability of simultaneous binary-image processing, the capability of processing digital images of any digit capacity (bit depth), and the absence of computational operations at the prediction stage.
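The pipeline described above, split into bit planes, predict each bit, entropy-code the residual, can be sketched as follows. The paper's predictor uses conditional Markov chains; here it is replaced by a simple left-neighbour predictor for illustration, and the sample image is hypothetical. Huffman coding of the residual planes is left to any standard implementation.

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into 8 binary planes (LSB first)."""
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> k) & 1 for k in range(8)]

def predict_residual(plane):
    """XOR each bit with its left neighbour. A good predictor makes the
    residual mostly zeros, which the entropy coder then exploits."""
    pred = np.zeros_like(plane)
    pred[:, 1:] = plane[:, :-1]
    return plane ^ pred

# Hypothetical smooth image patch, e.g. from a satellite picture.
img = np.array([[200, 201, 201],
                [199, 200, 200]], dtype=np.uint8)
planes = [predict_residual(p) for p in bit_planes(img)]
```

Decoding inverts each step: a cumulative XOR along each row recovers the bit plane, and shifting and summing the planes rebuilds the original pixels, so the scheme is lossless. On smooth imagery the high-order residual planes are almost all zeros.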

2 citations


Network Information
Related Topics (5)
Image segmentation
79.6K papers, 1.8M citations
82% related
Feature (computer vision)
128.2K papers, 1.7M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
82% related
Image processing
229.9K papers, 3.5M citations
80% related
Convolutional neural network
74.7K papers, 2M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15