Topic
Lossless JPEG
About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as .jls.
Papers
18 Jun 1999
TL;DR: Any image can be segmented into information-bearing regions, with the background re-added at the reconstruction stage, providing lossless compression of approximately 3:1 in most cases; combined with an adaptive arithmetic coding model, an extremely efficient and powerful lossless coding model yielding compression ratios up to 9:1 can be developed.
Abstract: Any image can be segmented into information-bearing regions, with the background re-added at the reconstruction stage, providing lossless compression of approximately 3:1 in most cases. When this segmentation process is combined with an adaptive arithmetic coding model, an extremely efficient and powerful lossless coding model yielding compression ratios up to 9:1 can be developed. Such models can be quite useful for losslessly archiving a wide variety of biomedical images.
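The separation step described above can be sketched as follows: only the foreground pixels and a mask are kept, and the constant background is regenerated at reconstruction. This is an illustrative sketch of the segmentation idea, not the paper's actual algorithm; the function names and the uniform-background assumption are mine.

```python
import numpy as np

def segment_and_pack(img, bg_value=0):
    """Separate foreground pixels from a uniform background.

    Returns a boolean mask plus the foreground pixel values; the
    constant background is regenerated at reconstruction time, so
    only the (typically much smaller) foreground need be entropy-coded.
    """
    mask = img != bg_value                 # foreground = non-background pixels
    return mask, img[mask]

def reconstruct(mask, fg_pixels, bg_value=0):
    """Rebuild the full image losslessly from mask + foreground values."""
    img = np.full(mask.shape, bg_value, dtype=fg_pixels.dtype)
    img[mask] = fg_pixels
    return img

# Tiny demo: an image that is mostly background packs down to a bit-mask
# plus only four pixel values.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:4, 2:4] = 200
mask, fg = segment_and_pack(img)
assert np.array_equal(reconstruct(mask, fg), img)   # lossless round trip
```

In practice the mask itself compresses extremely well (long runs of identical bits), which is where the reported ~3:1 savings would come from on sparse biomedical images.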
4 citations
21 May 2010
TL;DR: An algorithm is presented that uses a luminance DC-coefficient-scale differential method to obtain the width of a header-missing JPEG fragment without using any original metadata.
Abstract: The size field in an encoded JPEG file header is essential for displaying the image, so it must be determined to avoid garbled rendering. The current approach of constructing a pseudo header does not handle this issue well. To address this problem, the paper presents an algorithm that uses a luminance DC-coefficient-scale differential method to obtain the width of a header-missing JPEG fragment without using any original metadata. We use both this algorithm and the construction of decipher prerequisites to obtain the value. Experimental results show that the width and the location of the 8×8 block sequence of a header-missing JPEG fragment can be obtained with resilience and accuracy.
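The differential idea can be illustrated as follows: for the true width (in blocks), vertically adjacent 8×8 blocks tend to be similar, so reshaping the decoded DC-coefficient sequence at the correct width minimizes the row-to-row differences. This is a plausible sketch of the underlying principle under my own assumptions (synthetic DC values, a simple mean-absolute-difference cost), not the paper's exact algorithm.

```python
import numpy as np

def estimate_width(dc, max_width=64):
    """Estimate the image width, in 8x8 blocks, of a DC-coefficient
    sequence from a header-missing fragment: try candidate widths and
    pick the one that makes vertically adjacent blocks most similar."""
    best_w, best_cost = None, np.inf
    for w in range(2, max_width + 1):
        rows = len(dc) // w
        if rows < 2:
            break
        grid = np.asarray(dc[: rows * w], dtype=float).reshape(rows, w)
        cost = np.abs(np.diff(grid, axis=0)).mean()   # vertical smoothness
        if cost < best_cost:
            best_w, best_cost = w, cost
    return best_w

# Synthetic fragment, 12 blocks wide: each column keeps its own DC level
# (hypothetical values) with a gentle brightness drift down the rows.
pattern = np.array([5, 180, 40, 220, 90, 10, 150, 60, 200, 30, 120, 250])
grid = pattern[None, :] + np.arange(20)[:, None]
dc = grid.ravel().tolist()
print(estimate_width(dc))  # → 12
```

Any wrong candidate width misaligns the columns, so the cost jumps; multiples of the true width also score low but strictly worse, since they skip rows.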
4 citations
20 Oct 2004
TL;DR: The proposed method is primarily a two-pass, search-aware variation of the JPEG-LS compression algorithm, and it can be easily adapted for other compression methods such as lossless JPEG and CALIC.
Abstract: With increasing amounts of image data, such as satellite images, being stored in compressed form, efficient retrieval of the images has become a major concern. For satellite images, one typical problem is, given an image pattern, to quickly locate its matches in the image. It is highly desirable that the compressed images not be decompressed while the matching is performed. This problem is generally referred to as "two-dimensional compressed pattern matching". We report our recent work on compressed pattern matching in JPEG-LS compressed images. The proposed method is primarily a two-pass variation of the JPEG-LS compression algorithm, made search-aware. Experimental results show that the two-pass compression algorithm, unlike other search-aware compression algorithms, does not sacrifice compression. The proposed approach can be easily adapted for other compression methods such as lossless JPEG and CALIC.
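For context on what a JPEG-LS variant would modify: the core of JPEG-LS is the LOCO-I median edge detector (MED) predictor, which predicts each pixel from its left, upper, and upper-left neighbors and encodes only the residual. The sketch below shows the standard MED predictor and residual pass; it is background for the abstract, not the paper's two-pass, search-aware variant.

```python
def med_predict(a, b, c):
    """JPEG-LS / LOCO-I median edge detector: a=left, b=above, c=upper-left."""
    if c >= max(a, b):
        return min(a, b)        # edge detected: predict along the edge
    if c <= min(a, b):
        return max(a, b)
    return a + b - c            # smooth region: planar prediction

def residuals(img):
    """Prediction residuals for a 2D list of pixels (missing borders read as 0)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x else 0
            b = img[y - 1][x] if y else 0
            c = img[y - 1][x - 1] if x and y else 0
            out.append(img[y][x] - med_predict(a, b, c))
    return out

img = [[10, 10, 12],
       [10, 11, 13],
       [11, 12, 14]]
print(residuals(img))  # → [10, 0, 2, 0, 1, 1, 1, 1, 1]
```

The residuals are small and clustered near zero on smooth images, which is what makes the subsequent Golomb coding stage effective; a search-aware variant must keep the coded stream alignable so patterns can be matched without full decompression.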
4 citations
TL;DR: This paper presents a new method for encoding multiwavelet-decomposed images by defining coefficients suitable for the SPIHT algorithm, which gives better compression performance than existing methods in many cases.
Abstract: Advances in wavelet transforms and quantization methods have produced algorithms capable of surpassing existing image compression standards such as the Joint Photographic Experts Group (JPEG) algorithm. The existing compression methods for the JPEG standards use the DCT with arithmetic coding and the DWT with Huffman coding. The DCT uses a single kernel, whereas wavelets offer a choice of filters depending on the application. The wavelet-based Set Partitioning In Hierarchical Trees (SPIHT) algorithm gives better compression. For best performance in image compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry, but a single wavelet cannot possess all of these properties simultaneously. The relatively new field of multiwavelets offers more design options and can combine all desirable transform features. However, there are limitations in applying the SPIHT algorithm to multiwavelet coefficients. This paper presents a new method for encoding multiwavelet-decomposed images by defining coefficients suitable for the SPIHT algorithm, which gives better compression performance than existing methods in many cases.
4 citations
13 Feb 2006
TL;DR: The proposed key scheme exploits the information contained in a codestream and the features invariant under truncation to minimize the file-size overhead for DRM applications, yet preserves correct derivation of keys for descendants even when an encrypted codestream is truncated.
Abstract: JPEG 2000 provides multiple scalable accesses to a single codestream. Digital Rights Management of a JPEG 2000 codestream should preserve the original flexibility of scalability yet provide a mechanism to ensure that what you see is what you pay for: a low-resolution version displayed on a smartphone should cost less than a high-resolution version displayed on a PC. We present an efficient key scheme for multi-type, multilevel scalable access control for JPEG 2000 and Motion JPEG 2000 codestreams. The scheme is based on a poset representation of the scalable access control and a hash-based hierarchical access key scheme, both proposed elsewhere. The proposed key scheme exploits the information contained in a codestream and the features invariant under truncation to minimize the file-size overhead for DRM applications, yet preserves correct derivation of keys for descendants even when an encrypted codestream is truncated.
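The hash-based hierarchical idea the scheme builds on can be sketched in a few lines: each descendant key is derived by hashing the parent key together with a child label, so holding a high-level key lets a client derive every lower level, but the one-way hash prevents climbing back up. The labels and master key below are hypothetical; this illustrates the general construction, not the paper's exact poset-based scheme.

```python
import hashlib

def derive_key(parent_key: bytes, label: bytes) -> bytes:
    """One-way derivation of a child access key from its parent.

    An ancestor key yields all descendant keys by repeated hashing,
    but SHA-256 cannot be inverted to recover an ancestor from a
    descendant -- the property the access-control hierarchy relies on.
    """
    return hashlib.sha256(parent_key + label).digest()

# Keys for progressively lower resolution levels of one codestream
# (hypothetical labels):
master = b"per-image master key"
res = [master]
for level in range(3):
    res.append(derive_key(res[-1], b"resolution:%d" % level))

# A client licensed for res[2] can still derive res[3] on its own ...
assert derive_key(res[2], b"resolution:2") == res[3]
# ... but has no computational way to recover res[1] from res[2].
```

Because the derivation depends only on data available in the codestream, no extra per-level key material needs to be embedded, which is how such schemes keep the file-size overhead minimal even under truncation.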
4 citations