
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG & .jls.


Papers
Patent
14 Nov 2005
TL;DR: In this paper, a scaler scales an input image to a predetermined size to output an image for the main JPEG, and scales the input image to a size smaller than the main JPEG image to output a thumbnail JPEG image.
Abstract: Disclosed herein is a device and method for generating a thumbnail Joint Photographic Experts Group (JPEG) image and a medium for storing the thumbnail JPEG. The device includes a scaler, first frame memory, a JPEG signal processing unit, second frame memory and a memory control unit. The scaler scales an input image to a predetermined size to output an image for the main JPEG, and scales the input image to a size smaller than the image for the main JPEG to output an image for the thumbnail JPEG. The first frame memory stores the image for the thumbnail JPEG. The JPEG signal processing unit performs JPEG encoding on the image for the main JPEG and the image for the thumbnail JPEG, and outputs a main JPEG image and a thumbnail JPEG image. The second frame memory stores the main JPEG image and the thumbnail JPEG image. The memory control unit transfers the image for the thumbnail JPEG to the first frame memory, and transfers the main JPEG image and the thumbnail JPEG image to the second frame memory. Additionally, the JPEG signal processing unit receives the main JPEG image and the thumbnail JPEG image stored in the second frame memory, and outputs them as a single JPEG file.

4 citations
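
As a rough illustration of the scaler-and-encoder flow described in this patent, the following Python sketch (assuming Pillow; the function name, sizes and quality settings are placeholders, and the hardware frame memories and memory control unit are not modelled) scales one input image to a main size and a thumbnail size and JPEG-encodes both:

```python
# Illustration only: the patent describes a hardware pipeline, not this library code.
from io import BytesIO
from PIL import Image  # assumes Pillow is available

def encode_main_and_thumbnail(path, main_size=(1024, 768), thumb_size=(160, 120)):
    """Scale one input image to a main size and a smaller thumbnail size,
    JPEG-encode both, and return the two byte streams."""
    img = Image.open(path).convert("RGB")

    main_img = img.resize(main_size)    # scaler output for the main JPEG
    thumb_img = img.resize(thumb_size)  # scaler output for the thumbnail JPEG

    main_buf, thumb_buf = BytesIO(), BytesIO()
    main_img.save(main_buf, format="JPEG", quality=90)    # JPEG encoding (main)
    thumb_img.save(thumb_buf, format="JPEG", quality=75)  # JPEG encoding (thumbnail)

    # In the patent, both encoded streams are held in frame memory and emitted as a
    # single JPEG file (typically with the thumbnail embedded in an EXIF/APP1 segment).
    return main_buf.getvalue(), thumb_buf.getvalue()
```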

Proceedings Article
S. Annadurai, P. Geetha
11 Dec 2005
TL;DR: The novel secured lossless compression approach proposed in this paper is based on a reversible integer wavelet transform, the SPIHT algorithm, a new modified run-length coding for character representation, and selective bit scrambling; the scrambling method employed is fast, simple to implement, and provides security.
Abstract: Lossless compression schemes with secure transmission play a key role in telemedicine applications, helping accurate diagnosis and research. Traditional cryptographic algorithms for data security are not fast enough to process vast amounts of data. Hence, a novel secured lossless compression approach proposed in this paper is based on a reversible integer wavelet transform, the SPIHT algorithm, a new modified run-length coding for character representation, and selective bit scrambling. The use of the lifting scheme allows truly lossless integer-to-integer wavelet transforms to be generated. Images are compressed and decompressed by the well-known SPIHT algorithm. The proposed modified run-length coding greatly improves the compression performance and also increases the security level. This work employs a scrambling method that is fast, simple to implement, and provides security. The lossless compression ratios and distortion performance of the proposed method are found to be better than those of other lossless techniques.

4 citations
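
The reversible integer-to-integer wavelet transform obtained via lifting can be sketched as follows; the paper does not state which filter is used, so a LeGall 5/3 lifting pair with periodic boundary extension is assumed here purely for illustration:

```python
import numpy as np

def int53_forward(x):
    """One level of a reversible integer-to-integer wavelet transform built with
    lifting (LeGall 5/3 assumed; periodic boundary extension for simplicity).
    Input length must be even."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - ((even + np.roll(even, -1)) >> 1)   # predict step: high-pass band
    s = even + ((np.roll(d, 1) + d + 2) >> 2)     # update step: low-pass band
    return s, d

def int53_inverse(s, d):
    """Exact inverse of int53_forward: each rounded lifting step is undone in turn."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Round-trip check: the transform is truly lossless.
sig = np.random.randint(0, 256, 64)
low, high = int53_forward(sig)
assert np.array_equal(int53_inverse(low, high), sig)
```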

Proceedings Article
11 Jul 2010
TL;DR: Simulation results show that the proposed compression algorithm, called the response-dependent algorithm, outperforms existing methods.
Abstract: The JPEG image compression standard is very sensitive to errors. Even though it contains error resilience features, it cannot easily cope with errors induced by the computer soft faults prevalent in remote-sensing applications. Hence, new fault tolerance detection methods are developed to sense soft errors in major parts of the system while also protecting data across the boundaries where data flow from one subsystem to another. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG information. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes a new compression algorithm called the response-dependent algorithm. Simulation results show that the proposed method outperforms existing methods.

4 citations
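
The lattice structure that JPEG quantization leaves in the DCT domain, which the paper above exploits, can be seen in a short sketch (assuming SciPy; the uniform quantization table is a placeholder, not a real JPEG table):

```python
import numpy as np
from scipy.fftpack import dct  # assumes SciPy is available

# Placeholder uniform quantization table; a real encoder uses the JPEG
# luminance/chrominance tables scaled by the quality factor.
Q = np.full((8, 8), 16.0)

def quantized_dct_block(block):
    """Forward 8x8 DCT followed by quantization and dequantization, as a JPEG
    encode/decode pair would do. The reconstructed coefficients are exact integer
    multiples of the quantization steps, i.e. they lie on a lattice in the DCT domain."""
    c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.round(c / Q) * Q

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0
coeffs = quantized_dct_block(block)
assert np.allclose(coeffs % Q, 0.0)  # every coefficient sits on the lattice
```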

Journal Issue
TL;DR: A new method for designing predictors suitable for lossless image coding is proposed; it demonstrated superior coding efficiency compared to the conventional method of minimizing the mean squared prediction error, and was confirmed to achieve a coding rate 0.37 bit/pel lower than JPEG-LS, an international standard for lossless image coding.
Abstract: In this paper, the authors propose a new method for designing predictors suitable for lossless image coding. In recent years, lossless coding systems based on optimal design of predictors for each image have been studied. In these systems, the linear prediction coefficients are determined so as to minimize the mean squared prediction error. In lossless image coding, however, where the ultimate goal is to reduce the coding rate, minimizing the mean squared prediction error does not necessarily yield the best results. Therefore, in order to reduce the coding rate directly, the authors attempted to formulate the amount of information in the prediction errors and design the predictors so as to minimize that value. Moreover, the image is divided into blocks, and these blocks are classified into multiple classes; multiple predictors for adaptive prediction are optimized at the same time by repeatedly executing the design of the predictor for each class and updating the class of each block, based on the cost representing the amount of information. In the results of a coding simulation, this system demonstrated superior coding efficiency compared to the conventional method of minimizing the mean squared prediction error, and was confirmed to achieve a coding rate 0.37 bit/pel lower than JPEG-LS, which is an international standard for lossless image coding. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(5): 90–98, 2007; Published online in Wiley InterScience. DOI 10.1002/scj.10318

4 citations
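
For contrast with the rate-oriented design proposed above, a minimal sketch of the conventional baseline, a least-squares (MSE-optimal) linear predictor over causal neighbours, together with an empirical-entropy estimate of the residual rate, is given below; the neighbour offsets and function names are illustrative, not taken from the paper:

```python
import numpy as np

def mse_optimal_predictor(img, offsets=((0, -1), (-1, 0), (-1, 1), (-1, -1))):
    """Conventional baseline: linear predictor coefficients chosen by least squares
    (minimum mean squared prediction error) over causal neighbours of each pixel."""
    img = np.asarray(img, dtype=np.float64)
    H, W = img.shape
    rows, targets = [], []
    for y in range(1, H):
        for x in range(1, W - 1):
            rows.append([img[y + dy, x + dx] for dy, dx in offsets])
            targets.append(img[y, x])
    A, b = np.asarray(rows), np.asarray(targets)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs, b - A @ coeffs  # coefficients and prediction residuals

def residual_rate_estimate(residual):
    """Empirical zeroth-order entropy (bits/pel) of the rounded prediction errors,
    a crude proxy for the coding-rate cost the authors minimize directly."""
    _, counts = np.unique(np.round(residual).astype(np.int64), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

In the paper's scheme, the predictor of each class is instead re-optimized against this kind of information cost while blocks are repeatedly reassigned to classes, rather than against the squared error alone.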

Proceedings Article
08 Sep 2005
TL;DR: Experimental results for several standard video sequences show that multiframe motion compensation with optimal weighting outperforms regular 1-frame motion compensation, with gains of up to 18.2% even for the case of just two past reference frames.
Abstract: In this paper we consider the problem of lossless compression of video sequences exploiting the temporal redundancy between frames. In particular, we present a technique performing motion compensation on more than one past frame. Each prediction component is optimally weighted to minimize the mean squared error of the residual. Experimental results for several standard video sequences show that multiframe motion compensation with optimal weighting outperforms regular 1-frame motion compensation with gains up to 18.2% even for the case of just two past reference frames.

4 citations
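
The optimal weighting of multiple motion-compensated predictions described above amounts to a least-squares fit; a minimal sketch under that reading (motion search is assumed to have produced the reference predictions already, and no bit-exact decoder constraints are modelled) follows:

```python
import numpy as np

def optimal_prediction_weights(current, ref_preds):
    """Least-squares weights that combine several motion-compensated reference
    predictions so that the mean squared residual against the current frame is
    minimal. Simplified illustration of weighted multiframe motion compensation."""
    A = np.stack([np.asarray(p, dtype=np.float64).ravel() for p in ref_preds], axis=1)
    b = np.asarray(current, dtype=np.float64).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)       # one weight per reference frame
    residual = (b - A @ w).reshape(np.asarray(current).shape)
    return w, residual  # the residual is what remains to be coded losslessly
```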


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 21
2022: 40
2021: 5
2020: 2
2019: 8
2018: 15