
Showing papers on "Entropy encoding published in 1993"


Journal ArticleDOI
TL;DR: New and conceptually very simple ways of estimating the entropy of an ergodic stationary source as well as new insight into the workings of such well-known data compression schemes as the Lempel-Ziv algorithm are presented.
Abstract: Some new ways of defining the entropy of a process by observing a single typical output sequence as well as a new kind of Shannon-McMillan-Breiman theorem are presented. This provides new and conceptually very simple ways of estimating the entropy of an ergodic stationary source as well as new insight into the workings of such well-known data compression schemes as the Lempel-Ziv algorithm.
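The single-sequence estimation idea can be illustrated with an LZ78-style parse: for a stationary ergodic source, the number of distinct phrases c in an incremental parse of n symbols satisfies c*log2(c)/n -> H. The sketch below is an illustrative assumption of that classical estimator, not the specific construction of the paper.

    import math
    import random

    def lz78_entropy_estimate(sequence):
        """Estimate the entropy rate (bits/symbol) from a single sequence via
        an LZ78-style incremental parse: count the distinct phrases c and
        return c*log2(c)/n, which converges to the entropy rate."""
        phrases, current = set(), ""
        for symbol in sequence:
            current += symbol
            if current not in phrases:
                phrases.add(current)
                current = ""
        c = len(phrases) + (1 if current else 0)
        n = len(sequence)
        return c * math.log2(c) / n if c > 1 else 0.0

    if __name__ == "__main__":
        random.seed(0)
        # Biased i.i.d. binary source, P(1) = 0.1; true entropy is about 0.47 bits.
        seq = "".join("1" if random.random() < 0.1 else "0" for _ in range(200000))
        print("estimated entropy rate:", round(lz78_entropy_estimate(seq), 3))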

305 citations


Proceedings ArticleDOI
30 Mar 1993
TL;DR: A new method gives compression comparable with the JPEG lossless mode, with about five times the speed, based on a novel use of two neighboring pixels for both prediction and error modeling.
Abstract: A new method gives compression comparable with the JPEG lossless mode, with about five times the speed. FELICS is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding, the authors use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter they present and analyze a provably good method for estimating the single coding parameter.
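As a rough illustration of the two-neighbour idea (a structural sketch under assumed conventions, not the paper's exact coder): each pixel is reduced either to an in-range event coded with an adjusted binary code, or to an out-of-range event whose distance from the range is sent with a Golomb or Rice code.

    def felics_style_events(pixel, n1, n2):
        """Decompose one pixel into FELICS-style coding events using its two
        neighbours: one bit decides 'inside [L, H]' or not; in-range offsets go
        to an adjusted binary code, out-of-range distances to a Golomb/Rice
        code. The actual codes and the provably good parameter estimator are
        not reproduced here."""
        lo, hi = min(n1, n2), max(n1, n2)
        if lo <= pixel <= hi:
            return ("in_range", pixel - lo)
        if pixel < lo:
            return ("below_range", lo - pixel - 1)
        return ("above_range", pixel - hi - 1)

    if __name__ == "__main__":
        row_above = [100, 103, 101, 98]
        row = [102, 105, 90, 99]
        # One simple neighbour choice for interior pixels: left and above.
        for x in range(1, len(row)):
            print(felics_style_events(row[x], row[x - 1], row_above[x]))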

259 citations


Paul G. Howard1
02 Jan 1993
TL;DR: It is shown that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding, and that greatly increased speed can be achieved at only a small cost in compression efficiency.
Abstract: Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state of the art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed comes from simplified coding and modeling. Coding is simplified by using prefix codes when arithmetic coding is not necessary, and by using a new practical version of arithmetic coding, called quasi-arithmetic coding, when the precision of arithmetic coding is needed. We simplify image modeling by using small prediction contexts and making plausible assumptions about the distributions of pixel intensity values. For text modeling we use self-organizing-list heuristics and low-precision statistics.

133 citations


Patent
Michael J. Gormish1, James D Allen1
22 Oct 1993
TL;DR: In this paper, an encoding and decoding apparatus is used for the compression and expansion of data. A state machine is provided in which each state has at least one transition pair; each element of the transition pair comprises zero or more bits representative of the compact code to be output and the identification of the next state to proceed to.
Abstract: The present invention provides an encoding and decoding apparatus used for the compression and expansion of data. A state machine is provided having a plurality of states. Each state has at least one transition pair. Each element of the transition pair comprises zero or more bits representative of the compact code to be output and the identification of the next state to proceed to. The transition pair reflects an output for a yes and no response associated with the probability of the data to be compacted and whether the data falls within that probability.
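The transition-pair structure lends itself to a tiny table-driven encoder. The sketch below is a hypothetical illustration, not the patent's tables: each state maps a yes/no decision to zero or more output bits plus the next state, and this particular (invented) table happens to implement a simple Golomb-style run-length code for a skewed binary source.

    # Each state maps a decision to (bits to output, next state). With this
    # invented table, likely 'yes' decisions emit nothing until a run of two
    # completes, so the machine behaves like a Golomb run-length code with m=2,
    # and the emitted stream is uniquely decodable given the start state.
    TRANSITIONS = {
        "S0": {True: ("", "S1"), False: ("10", "S0")},
        "S1": {True: ("0", "S0"), False: ("11", "S0")},
    }

    def encode(decisions, start="S0"):
        out, state = [], start
        for decision in decisions:
            bits, state = TRANSITIONS[state][decision]
            out.append(bits)          # zero or more bits per decision
        return "".join(out), state

    if __name__ == "__main__":
        # From S0: True,True -> '0'; True,False -> '11'; False -> '10'.
        bits, final_state = encode([True, True, True, False, True, True, False])
        print(bits, final_state)      # '011010' S0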

74 citations


Journal ArticleDOI
TL;DR: The authors' study is unlike previous studies of the effects of lossy compression in that they consider nonbinary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, and use statistical methods that are more appropriate for nonbinary clinical data than are the popular receiver operating characteristic curves.
Abstract: The authors apply a lossy compression algorithm to medical images, and quantify the quality of the images by the diagnostic performance of radiologists, as well as by traditional signal-to-noise ratios and subjective ratings. The authors' study is unlike previous studies of the effects of lossy compression in that they consider nonbinary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, use statistical methods that are more appropriate for nonbinary clinical data than are the popular receiver operating characteristic curves, and use low-complexity predictive tree-structured vector quantization for compression rather than DCT-based transform codes combined with entropy coding. The authors' diagnostic tasks are the identification of nodules (tumors) in the lungs and lymphadenopathy in the mediastinum from computerized tomography (CT) chest scans. Radiologists read both uncompressed and lossy compressed versions of images. For the image modality, compression algorithm, and diagnostic tasks the authors consider, the original 12 bit per pixel (bpp) CT image can be compressed to between 1 bpp and 2 bpp with no significant changes in diagnostic accuracy. The techniques presented here for evaluating image quality do not depend on the specific compression algorithm and are useful new methods for evaluating the benefits of any lossy image processing technique.

67 citations


Journal ArticleDOI
TL;DR: Two versions of the coder are developed: an optimal encoder based on dynamic programming arguments, and a suboptimal heuristic based on arithmetic coding that achieves compression that is within a constant of a perfect entropy coder for independent and identically distributed inputs.
Abstract: We introduce "Block Arithmetic Coding" (BAC), a technique for entropy coding that combines many of the advantages of ordinary stream arithmetic coding with the simplicity of block codes. The code is variable length in to fixed out (V to F), unlike Huffman coding which is fixed in to variable out (F to V). We develop two versions of the coder: 1) an optimal encoder based on dynamic programming arguments, and 2) a suboptimal heuristic based on arithmetic coding. The optimal coder is optimal over all V to F complete and proper block codes. We show that the suboptimal coder achieves compression that is within a constant of a perfect entropy coder for independent and identically distributed inputs. BAC is easily implemented, even with large codebooks, because the algorithms for coding and decoding are regular. For instance, codebooks with 2^32 entries are feasible. BAC also does not suffer catastrophic failure in the presence of channel errors. Decoding errors are confined to the block in question. The encoding is in practice reasonably efficient. With i.i.d. binary inputs with P(1)=0.95 and 16 bit codes, entropy arguments indicate at most 55.8 bits can be encoded; the BAC heuristic achieves 53.0 and the optimal BAC achieves 53.5. Finally, BAC appears to be much faster than ordinary arithmetic coding.
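For orientation, the quoted entropy bound of roughly 55.8 input bits per 16-bit codeword follows from dividing the 16 output bits by the binary entropy of the source (a standard check, rounding aside):

    H(0.05) = -0.95\log_2 0.95 - 0.05\log_2 0.05 \approx 0.2864 \ \text{bits per input bit}
    \frac{16}{0.2864} \approx 55.9 \ \text{input bits per 16-bit codeword}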

31 citations


Proceedings ArticleDOI
30 Mar 1993
TL;DR: The authors examine the resource requirements and compression efficiency of the coding phase, concentrating on applications with medium and large alphabets, and find that Huffman coding is faster than arithmetic coding in most situations.
Abstract: The authors examine the resource requirements and compression efficiency of the coding phase, concentrating on applications with medium and large alphabets. When semi-static two-pass encoding can be used, Huffman coding is two to four times faster than arithmetic coding, and sometimes results in superior compression. When an adaptive coder is required the difference in speed is smaller, but Gallager's implementation of dynamic Huffman coding is still faster than arithmetic coding in most situations. The compression loss through the use of Huffman codes is negligible in all but extreme circumstances. Where very high speed is necessary, splay coding is also worth considering, although it yields poorer compression.

26 citations


Journal ArticleDOI
TL;DR: A generic video codec, able to compress image sequences efficiently regardless of their input formats, is presented, which can handle both interlaced and progressive input sequences, the temporal redundancies being exploited by interframe/interfield coding.
Abstract: We present a generic video codec, which is able to compress image sequences efficiently regardless of their input formats. In addition, this codec supports a wide range of bit rates, without significant changes in its main architecture. A multiresolution representation of the data is generated by a Gabor-like wavelet transform. The motion estimation is performed by a locally adaptive multigrid block-matching technique. This codec can handle both interlaced and progressive input sequences, the temporal redundancies being exploited by interframe/interfield coding. A perceptual quantization of the resulting coefficients is then performed, followed by adaptive entropy coding. Simulations using different test sequences demonstrate a reconstructed signal of good quality for a wide range of bit rates, thereby showing that this codec can perform generic coding with reduced complexity and high efficiency.

25 citations


Proceedings ArticleDOI
30 Mar 1993
TL;DR: In this proposed variation, each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to the given codebook, and the chosen codebook's codewords are then used to encode the resulting residuals.
Abstract: Weighted universal vector quantization uses traditional codeword design techniques to design locally optimal multi-codebook systems. Application of this technique to a sequence of medical images produces a 10.3 dB improvement over standard full search vector quantization followed by entropy coding at the cost of increased complexity. In this proposed variation, each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to the given codebook. The chosen codebook's codewords are then used to encode the resulting residuals. Application of the mean-removed system to the medical data set achieves up to 0.5 dB improvement at no rate expense.
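A hedged sketch of the mean-removed step described above, assuming a plain full-search multi-codebook system (the weighted-universal design procedure and the rate term are not reproduced): each codebook carries a 'prediction' value that is subtracted before the residual is matched against its codewords.

    import numpy as np

    def encode_mean_removed(vector, codebooks, means):
        """Full search over a multi-codebook system: subtract each codebook's
        mean ('prediction') value, match the residual against that codebook's
        codewords, and keep the (codebook, codeword) pair with least squared
        error. Distortion-only selection; the paper also weighs rate."""
        best = None
        for cb_index, (codebook, mean) in enumerate(zip(codebooks, means)):
            residual = vector - mean
            errors = ((codebook - residual) ** 2).sum(axis=1)
            cw_index = int(np.argmin(errors))
            if best is None or errors[cw_index] < best[0]:
                best = (errors[cw_index], cb_index, cw_index)
        return best[1], best[2]

    def decode_mean_removed(cb_index, cw_index, codebooks, means):
        return codebooks[cb_index][cw_index] + means[cb_index]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        means = [0.0, 50.0, 100.0]                    # per-codebook prediction values
        codebooks = [rng.normal(0.0, 5.0, (8, 4)) for _ in means]
        block = np.array([48.0, 52.0, 51.0, 47.0])    # a sample 4-sample supervector
        cb, cw = encode_mean_removed(block, codebooks, means)
        print(cb, cw, decode_mean_removed(cb, cw, codebooks, means).round(1))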

23 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition, for source symbol sets having a Laplacian distribution.
Abstract: Rice developed a universal noiseless coding structure that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Variations of such noiseless coders have been used in many NASA applications. Custom VLSI coder and decoder modules capable of processing over 50 million samples per second have been fabricated and tested. In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition, for source symbol sets having a Laplacian distribution. Except for the default option, other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery over a wide entropy range, and they confirm the optimality of the scheme. Comparisons with other known techniques are performed on several widely used images and the results further validate the coder's optimality.
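The adaptive selection among easily implemented code options can be illustrated with the usual Rice-style rule of costing each split parameter on a block of mapped residuals and keeping the cheapest; the option set below is an assumption for illustration, not the module's actual option list.

    def rice_code_length(value, k):
        """Bits needed to Rice-code a non-negative integer with parameter k:
        a unary quotient (value >> k ones plus a terminating zero) and k
        remainder bits."""
        return (value >> k) + 1 + k

    def select_option(block, k_options=range(8)):
        """Cost every option on the block and return the cheapest, as in
        Rice-style adaptive coders that pick the best of several codes."""
        costs = {k: sum(rice_code_length(v, k) for v in block) for k in k_options}
        best_k = min(costs, key=costs.get)
        return best_k, costs[best_k]

    if __name__ == "__main__":
        # Mapped (non-negative) prediction residuals for one block of samples.
        block = [0, 1, 0, 2, 1, 0, 3, 1, 0, 0, 2, 5, 1, 0, 1, 2]
        k, bits = select_option(block)
        print("selected option k =", k, "-> block cost", bits, "bits plus option ID")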

22 citations


Proceedings ArticleDOI
30 Mar 1993
TL;DR: Tunstall or Huffman codes are described using dual leaf-linked trees: one specifying the parsing of the source symbols into source words, and the other specifying the formation of code words from code symbols.
Abstract: Such codes are described using dual leaf-linked trees: one specifying the parsing of the source symbols into source words, and the other specifying the formation of code words from code symbols. Compression exceeds entropy by the amount of the informational divergence between source words and code words, divided by the expected source-word length. The asymptotic optimality of Tunstall or Huffman codes derives from the bounding of divergence while the expected source-word length is made arbitrarily large. A heuristic extension scheme is asymptotically optimal but also acts to reduce the divergence by retaining those source words which are well matched to their corresponding code words.
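For concreteness, a minimal Tunstall parsing-tree construction for a memoryless source (the dual code tree and the divergence analysis above are not reproduced): keep expanding the most probable leaf until the fixed-size codebook is filled, so that fixed-length codewords index variable-length source words.

    import heapq

    def tunstall_words(probs, num_codewords):
        """Build the Tunstall source words: start with the single symbols and
        repeatedly expand the most probable leaf into |alphabet| children while
        the leaf count still fits within num_codewords."""
        symbols = list(probs)
        heap = [(-probs[s], s) for s in symbols]   # max-heap via negated probabilities
        heapq.heapify(heap)
        leaves = len(heap)
        while leaves + len(symbols) - 1 <= num_codewords:
            p, word = heapq.heappop(heap)          # most probable current leaf
            for s in symbols:
                heapq.heappush(heap, (p * probs[s], word + s))
            leaves += len(symbols) - 1
        return sorted((word, -p) for p, word in heap)

    if __name__ == "__main__":
        # Binary source with P(a) = 0.7, P(b) = 0.3 and eight 3-bit codewords.
        for word, prob in tunstall_words({"a": 0.7, "b": 0.3}, 8):
            print(word, round(prob, 4))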

Patent
22 Jan 1993
TL;DR: In this article, the zerotree structure is described in the context of a pyramid-type image subband processor together with successive refinement quantization and entropy coding to facilitate data compression.
Abstract: A data processing system augments compression of non-zero values of significant coefficients by coding entries of a significance map independently of coding the values of significant non-zero coefficients. A dedicated symbol (from 1038) represents a zerotree structure encompassing a related association of insignificant coefficients within the tree structure. The zerotree symbol represents that a coefficient is a root of a zerotree if, at a threshold (T), the coefficient (from 1002) and all of its descendants that have been found to be insignificant at larger thresholds have magnitudes less than threshold T. The zerotree structure is disclosed in the context of a pyramid-type image subband processor together with successive refinement quantization and entropy coding to facilitate data compression.
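A hedged sketch of the zerotree test alone, assuming coefficients stored in a dict keyed by (level, row, col) with the usual quadtree parent-child relation; the successive-refinement bookkeeping (descendants already found significant at larger thresholds) and the entropy coder are omitted.

    def children(node, num_levels):
        """Children of a coefficient in a quadtree of subband coefficients,
        indexed as (level, row, col) with children at (level+1, 2r+dr, 2c+dc)."""
        level, r, c = node
        if level + 1 >= num_levels:
            return []
        return [(level + 1, 2 * r + dr, 2 * c + dc) for dr in (0, 1) for dc in (0, 1)]

    def is_zerotree_root(coeffs, node, threshold, num_levels):
        """True when the coefficient at 'node' and every descendant have
        magnitude below the threshold, so the whole subtree can be represented
        by a single dedicated zerotree symbol in the significance map."""
        stack = [node]
        while stack:
            n = stack.pop()
            if abs(coeffs.get(n, 0.0)) >= threshold:
                return False
            stack.extend(children(n, num_levels))
        return True

    if __name__ == "__main__":
        # Two-level toy tree: one root coefficient with four children.
        coeffs = {(0, 0, 0): 3.0,
                  (1, 0, 0): 1.0, (1, 0, 1): -2.0, (1, 1, 0): 0.5, (1, 1, 1): 1.5}
        print(is_zerotree_root(coeffs, (0, 0, 0), threshold=8.0, num_levels=2))  # True
        print(is_zerotree_root(coeffs, (0, 0, 0), threshold=2.0, num_levels=2))  # False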

01 Jan 1993
TL;DR: A method of universal binary entropy coding using simple state machines is presented and all state machines presented have a bounded required lookahead to decode each binary decision.
Abstract: A method of universal binary entropy coding using simple state machines is presented. We provide a natural rationale for the use of state machines. We discuss the building of state machines which produce uniquely decodable bit streams. Some state machines implement known codes, while others implement finite or even infinite variable length to variable length codes. Once a decodable state machine is found, it is adjusted for maximum compression. Compression results are given for both single context and multiple context coding. All state machines presented have a bounded required lookahead to decode each binary decision. In addition, the machines presented can easily provide joint source channel coding for constrained (e.g., run length limited) channels.

Proceedings ArticleDOI
03 May 1993
TL;DR: To speed up the process of search for a symbol in a Huffman tree and to reduce the memory size, a tree clustering algorithm is proposed to avoid high sparsity of the tree.
Abstract: Code compression is a key element in high speed digital data transport. A major compression is performed by converting the fixed-length codes to variable-length codes through a (semi-)entropy coding scheme. Huffman coding combined with run-length coding is shown to be a very efficient coding scheme. To speed up the process of search for a symbol in a Huffman tree and to reduce the memory size, a tree clustering algorithm is proposed to avoid high sparsity of the tree. The method is shown to be very efficient in memory size, and fast in searching for the symbol. For experimental video data with Huffman codes extended up to 13 bits in length, the entire memory space is shown to be 126 words, compared to the normal 2^13 = 8192 words.
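One common way to realize this kind of clustering (an assumption of the general approach, not the paper's exact algorithm) is a two-level lookup: a small root table indexed by the first few code bits resolves the frequent short codewords immediately, while the rare long codewords share small per-prefix overflow tables, keeping total memory far below a flat table addressed by the maximum code length.

    def build_tables(codes, root_bits):
        """codes: dict symbol -> prefix-free bit string. Returns a root table
        indexed by the first root_bits bits plus small overflow tables for the
        longer codewords sharing each root prefix."""
        root, overflow = {}, {}
        for sym, code in codes.items():
            if len(code) <= root_bits:
                pad = root_bits - len(code)
                for i in range(1 << pad):          # every completion of the short code
                    suffix = format(i, "0{}b".format(pad)) if pad else ""
                    root[code + suffix] = ("sym", sym, len(code))
            else:
                prefix, rest = code[:root_bits], code[root_bits:]
                root[prefix] = ("cluster", prefix, root_bits)
                overflow.setdefault(prefix, {})[rest] = sym
        return root, overflow

    def decode(bits, root, overflow, root_bits):
        out, pos = [], 0
        while pos < len(bits):
            chunk = bits[pos:pos + root_bits].ljust(root_bits, "0")  # pad at stream end
            kind, val, used = root[chunk]
            if kind == "sym":                  # short code: resolved in one lookup
                out.append(val)
                pos += used
            else:                              # long code: finish in its small cluster
                pos += root_bits
                rest, table = "", overflow[val]
                while rest not in table:
                    rest += bits[pos]
                    pos += 1
                out.append(table[rest])
        return out

    if __name__ == "__main__":
        codes = {"a": "0", "b": "10", "c": "110",
                 "d": "11100", "e": "11101", "f": "11110", "g": "11111"}
        root, over = build_tables(codes, root_bits=3)
        stream = "".join(codes[s] for s in "abacadabeagf")
        print("".join(decode(stream, root, over, 3)))
        print("table entries:", len(root) + sum(len(t) for t in over.values()))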

Journal ArticleDOI
TL;DR: The use of data compression to reduce bandwidth and storage requirements is discussed; a simple method for lossless compression, run-length encoding, is described, as are the more sophisticated Huffman codes, arithmetic coding, and the trie-based codes.
Abstract: The use of data compression to reduce bandwidth and reduce storage requirements is discussed. The merits of lossless versus lossy compression techniques, the latter offering far greater compression ratios, are considered. The limits of lossless compression are discussed, and a simple method for lossless compression, runlength encoding, is described, as are the more sophisticated Huffman codes, arithmetic coding, and the trie-based codes invented by A. Lempel and J. Ziv (1977, 1978). WAN applications as well as throughput and latency are briefly considered.
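As a concrete illustration of the simplest technique mentioned, a run-length encoder and decoder fit in a few lines (a generic sketch, not tied to any product discussed in the article):

    from itertools import groupby

    def rle_encode(data):
        """Replace each run of identical symbols with a (symbol, count) pair."""
        return [(sym, len(list(run))) for sym, run in groupby(data)]

    def rle_decode(pairs):
        return "".join(sym * count for sym, count in pairs)

    if __name__ == "__main__":
        text = "AAAABBBCCDAAAAAA"
        encoded = rle_encode(text)
        print(encoded)        # [('A', 4), ('B', 3), ('C', 2), ('D', 1), ('A', 6)]
        assert rle_decode(encoded) == text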

Journal ArticleDOI
TL;DR: An analytical criterion evaluating the performance of any perfect reconstruction linear transform in the frame of scene adaptive coding is derived and thereafter used to optimize linear multiresolution transforms.
Abstract: Scene adaptive coders are constituted by the cascade of a linear transform, scalar quantization, entropy coding and a buffer controlled by a feedback loop for bit rate regulation. The main contribution of this paper is to derive an analytical criterion evaluating the performance of any perfect reconstruction linear transform in the frame of scene adaptive coding. This criterion is thereafter used to optimize linear multiresolution transforms. The optimization adapts the filter parameters to the codec features and to the statistics of the 2-D sources; so, the authors call these transforms adapted multiresolution transforms (AMTs). The transforms under study are implemented by a cascade of separable perfect-reconstruction (PR) FIR two-band filter banks that can change at each resolution level. Two types of filter banks are envisaged: the PR orthogonal quadrature mirror filter (QMF) bank, which implements the orthogonal AMT, and the PR linear-phase filter (LPF) bank, which implements the biorthogonal AMT. They perform the optimization of the filters in their factorized lattice form, taking the finite length of the multipliers into account. Their criterion also allows them to compare the performance achieved by these two linear multiresolution transforms with that of other linear (multiresolution) transforms.

Proceedings ArticleDOI
16 Aug 1993
TL;DR: It is necessary to preprocess the images in order to reduce the amount of correlation among neighboring pixels, thereby improving the compression ratio; the performance of some lossless compression techniques in combination with preprocessing methods is examined.
Abstract: Data compression deals with representing information in a succinct way. Given that the major lossless or error-free compression methods like Huffman, arithmetic and Lempel-Ziv coding do not achieve great compression ratios, it is necessary to preprocess the images in order to reduce the amount of correlation among neighboring pixels, thereby improving the compression ratio. These preprocessing methods could achieve reduction of image entropy in the spatial domain, or in the spatial frequency domain. The performance of some lossless compression techniques in combination with preprocessing methods is examined.
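A minimal example of the kind of spatial-domain preprocessing meant here, assuming simple horizontal differencing as the predictor (one choice among many): the residuals have lower zero-order entropy than the original pixels, so a subsequent Huffman, arithmetic, or Lempel-Ziv coder compresses them better.

    import math
    from collections import Counter

    def entropy(values):
        """Zero-order entropy in bits per sample."""
        counts, n = Counter(values), len(values)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def horizontal_difference(image):
        """Keep the first pixel of each row, replace the rest by the difference
        from the left neighbour, reducing inter-pixel correlation."""
        return [[row[0]] + [row[x] - row[x - 1] for x in range(1, len(row))]
                for row in image]

    if __name__ == "__main__":
        # A small, smooth synthetic image: neighbouring pixels are highly correlated.
        image = [[10 + x + 2 * y for x in range(16)] for y in range(4)]
        flat = [p for row in image for p in row]
        residuals = [p for row in horizontal_difference(image) for p in row]
        print("original entropy :", round(entropy(flat), 2), "bits/pixel")
        print("residual entropy :", round(entropy(residuals), 2), "bits/pixel")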

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A novel segmentation-based method for coding motion-compensated prediction error images (PEIs) using dynamic thresholding and morphological operations that outperforms the DCT-based algorithm in terms of both PSNR and subjective visual quality.
Abstract: A novel segmentation-based method for coding motion-compensated prediction error images (PEIs) is described. The PEIs result from various motion-compensated techniques, e.g., block matching and pel-recursive techniques. A detailed study on the statistics of these kinds of images is carried out and shows that the correlation in the PEIs is very low compared with that in typical natural images. Therefore, the conventional transform coding or subband coding is not appropriate for the PEIs. The proposed method segments a PEI using dynamic thresholding and morphological operations. Various morphological operators are applied, resulting in a final clean image and a relatively small number of segments. The contour and the interior region are coded separately using entropy coding techniques. Comparisons with the DCT (discrete cosine transform) show that the proposed algorithm outperforms the DCT-based algorithm in terms of both PSNR and subjective visual quality.

Journal ArticleDOI
TL;DR: In this article, a hexagonally sampled image is split into a low passband and nine passbands of one octave width and 60 deg angular orientation, and the conditions to be satisfied by the filter banks for perlect reconstruction are presented.
Abstract: A hexagonally sampled image is split into a low passband and nine passbands of one octave width and 60 deg angular orientation. The conditions to be satisfied by the filter banks for perfect reconstruction are presented. The human visual system's response to stimuli at differing spatial frequencies is then employed to shape the coding noise spectrum. Rate is allocated under a frequency-weighted mean-square-error distortion measure. A framework is presented employing either the power spectral density of the image or the variance of the subbands. Both adaptive and nonadaptive entropy coding simulations are carried out under a Laplacian source distribution. Transparent coding results are presented at rates below 1 bit/pixel.

Journal ArticleDOI
TL;DR: A new filter bank design method for image coding applications and a new entropy coding algorithm for the compression of subband images that shows improved performance when compared with other existing methods.
Abstract: A method for image compression based on subband decomposition is presented. We describe a new filter bank design method for image coding applications and a new entropy coding algorithm for the compression of subband images. A set of relevant optimization criteria is defined for the filter bank design. For the compression, a composite source model is defined by combining vector quantization (VQ) and scalar quantization (SQ) with entropy coding. In the proposed scheme, VQ exploits the remaining statistical dependencies among the subband samples, while SQ allows an optimal control on local distortions. The system is based on a statistical model that uses VQ information to generate low entropy probability tables for an arithmetic coder. The bit rate can be shared between the VQ rate and the SQ rate, allowing many possible configurations in terms of performance and implementation complexity. The proposed system shows improved performance when compared with other existing methods.

Patent
28 Dec 1993
TL;DR: An upper limit of the quantization step is set so that visually acceptable image quality is obtained, and the quantization step is kept below this limit to suppress spatial image quality deterioration.
Abstract: PURPOSE: To suppress image quality deterioration in space by setting an upper limit of the quantization step such that visually acceptable image quality is obtained, and suppressing the step to be less than the upper limit. CONSTITUTION: A quantization step upper limit setting section 109 is used to set, for a frame arrangement setting section 101, an upper limit of the quantization step so as to obtain image quality allowed visually through quantization. First, a 0-th frame is inputted to a video input section 100 and, by using a frame processing order revision section, fed to a DCT section 105 that applies an orthogonal transformation to the difference between the frame being processed and a prediction frame. The DCT section 105 thus transforms each block into the frequency domain, a quantization section 106 adds weighting for each frequency band and performs quantization, and the quantized coefficients are subjected to entropy coding by a coding section 107, the result being output from a coding data output section 113. When the quantization step from the coding section 107 exceeds the upper limit, the quantization step set in the setting section 109 is used. COPYRIGHT: (C)1995,JPO


Journal ArticleDOI
TL;DR: Several parallel pipelined digital signal processor (DSP) architectures that implement the fast cosine transform (FCT)-based Joint Photographic Experts Group (JPEG) still picture image compression algorithm with arithmetic coding for entropy coding are described.
Abstract: Several parallel pipelined digital signal processor (DSP) architectures that implement the fast cosine transform (FCT)-based Joint Photographic Experts Group (JPEG) still picture image compression algorithm with arithmetic coding for entropy coding are described. The extended JPEG image compression algorithm's average execution time, when compressing and decompressing a 256*256 pixel monochrome still image, varied from 0.61 s to 0.12 s in architectures that contained from one to six processors. A common bus DSP multiprocessor system capable of meeting the critical timing requirements of digital image compression/decompression applications is also presented. In an effort to maximize DSP utilization, a simple static load distribution method is provided for assigning the load to the individual DSPs. These parallel pipelined DSP architectures can be used for a wide range of applications, including the MPEG implementation for video coding.

Proceedings ArticleDOI
17 Jan 1993
TL;DR: BAC is a variable-to-fixed block coder: the input is parsed into variable length substrings which are encoded with fixed length output strings.
Abstract: In a recent paper submitted to IEEE Transactions on Information Theory [1], we introduced BAC. BAC is a variable to fixed block coder in that the input is parsed into variable length substrings which are encoded with fixed length output strings. Assume the input is taken from an alphabet with m symbols and the codebook has K codewords. With each input symbol, the encoder splits the set of codewords into m disjoint, nonempty subsets. The recursion continues until fewer than m codewords remain. One of these is transmitted, and the encoder reinitialized. The encoding process is described in Figure 1.
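A hedged sketch of that recursion under a proportional-split heuristic (the split proportions and reinitialization details below are assumptions; the optimal dynamic-programming split is not reproduced): each incoming symbol selects one of m subsets of the current codeword set, and when fewer than m codewords remain, one of them is transmitted as the fixed-length output.

    def split_sizes(size, probs):
        """Split 'size' codewords into len(probs) disjoint, nonempty subsets with
        sizes roughly proportional to the symbol probabilities."""
        extra = size - len(probs)
        sizes = [1 + int(extra * p) for p in probs]
        sizes[0] += size - sum(sizes)          # hand any rounding remainder to subset 0
        return sizes

    def bac_encode_block(symbols, probs, K):
        """Consume symbols (indices into probs) until fewer than m codewords
        remain; return (codeword, number of symbols consumed)."""
        m, lo, size, consumed = len(probs), 0, K, 0
        for s in symbols:
            if size < m:
                break
            sizes = split_sizes(size, probs)
            lo += sum(sizes[:s])               # move to the subset chosen by this symbol
            size = sizes[s]
            consumed += 1
        return lo, consumed                    # 'lo' is sent as a fixed-length codeword

    def bac_decode_block(codeword, probs, K):
        """Recover one block's symbols by replaying the same splits and
        observing which subset contains the received codeword."""
        m, lo, size, out = len(probs), 0, K, []
        while size >= m:
            sizes = split_sizes(size, probs)
            s = 0
            while lo + sum(sizes[:s + 1]) <= codeword:
                s += 1
            out.append(s)
            lo += sum(sizes[:s])
            size = sizes[s]
        return out

    if __name__ == "__main__":
        import random
        random.seed(1)
        probs, K = [0.95, 0.05], 1 << 16       # binary source, 16-bit codewords
        data = [1 if random.random() < 0.05 else 0 for _ in range(2000)]
        codeword, n = bac_encode_block(data, probs, K)
        print("symbols packed into one 16-bit codeword:", n)
        print("round trip ok:", bac_decode_block(codeword, probs, K) == data[:n])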

Book ChapterDOI
01 Jan 1993
TL;DR: This chapter presents a brief overview of basic video coding techniques, which are common to most video compression systems, and presents three samples to illustrate how these techniques can be tailored for specific applications.
Abstract: In this chapter, we present a brief overview of basic video coding techniques, which are common to most video compression systems. The coding techniques reviewed include quantization, predictive coding, entropy coding, orthogonal transform, motion estimation/compensation, and subband processing. After the brief overview of these techniques, we present three sample video compression systems to illustrate how these techniques can be tailored for specific applications.

Proceedings ArticleDOI
06 Sep 1993
TL;DR: The experimental results have shown that the proposed method based on adaptive Huffman coding with an extended source alphabet yields better compression on Chinese text files.
Abstract: The compression method for Chinese text files proposed in this paper is based on a single pass data compression technique, adaptive Huffman coding. All Chinese text files to be compressed are modeled to contain not only ASCII characters, Chinese ideographic characters and punctuation marks, but also commonly used Chinese character pairs. The approach of using a static dictionary is employed to maintain about 3000 most frequently occurring character pairs found in general Chinese texts. This is to define the extension to the standard source alphabet in ideogram-based adaptive Huffman coding. The performance in compression ratio and CPU execution time of the proposed method is evaluated against those of the adaptive byte-oriented Huffman coding scheme, the adaptive ideogram-based Huffman coding scheme, and the adaptive LZW method. The experimental results have shown that the proposed method based on adaptive Huffman coding with an extended source alphabet yields better compression on Chinese text files.
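The extended-alphabet idea can be illustrated by the tokenization step alone (a hypothetical sketch: the dictionary below is invented, the real static dictionary holds about 3000 pairs, and the adaptive Huffman coder that consumes the symbol stream is not shown): a dictionary character pair is emitted as a single source symbol when it matches, otherwise the single character is emitted.

    def tokenize(text, pair_dictionary):
        """Map text to source symbols over the extended alphabet: a frequent
        character pair becomes one symbol, everything else stays a single
        character. The symbol stream then feeds an adaptive Huffman coder."""
        symbols, i = [], 0
        while i < len(text):
            pair = text[i:i + 2]
            if len(pair) == 2 and pair in pair_dictionary:
                symbols.append(pair)
                i += 2
            else:
                symbols.append(text[i])
                i += 1
        return symbols

    if __name__ == "__main__":
        # Hypothetical static dictionary of frequent character pairs.
        pairs = {"我們", "可以", "中文"}
        print(tokenize("我們可以壓縮中文文字", pairs))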

Proceedings ArticleDOI
27 Apr 1993
TL;DR: It is shown that the coding performance of a multistage quantized subband is inferior to that of direct (i.e., single stage) quantization; conditional entropy coding and conditional uniform quantization partly eliminate this loss.
Abstract: The authors thoroughly investigate the performance of multistage scalar quantization and examine three encoding strategies: concatenated coding, conditional entropy coding, and conditional uniform quantization. It is shown that the coding performance of a multistage quantized subband is inferior to that of direct (i.e., single stage) quantization. Conditional entropy coding and conditional uniform quantization partly eliminate this loss.

Patent
28 Jun 1993
TL;DR: In this article, the capacity of a buffer memory for converting scanning sequence from a zigzag sequence into a raster sequence was reduced by providing a run length decoding means and a scanning sequence conversion means converting a scan sequence in a block into a prescribed scanning sequence in the processing unit.
Abstract: PURPOSE: To reduce the capacity of the buffer memory for converting the scanning sequence from a zigzag sequence into a raster sequence, by providing a run length decoding means and a scanning sequence conversion means that converts the scanning sequence in a block into a prescribed scanning sequence in the processing unit. CONSTITUTION: A compression data string Zdata, which has been subjected to entropy coding and run length coding, is given to a variable length coding decoder VLD, in which the variable length code is decoded. Immediately afterwards, a run length decoding means RLD decodes the compression data string Vdata, restoring the data to a data series using sets (z, d) of the number of preceding zeros (z) and a data value (d) other than zero. A scan sequence conversion means ZIGZAG then uses, e.g., buffer memories CMEM0 and CMEM1 to convert the scanning sequence from the zigzag sequence into a raster sequence, and the raster-series data are subjected to inverse quantization and inverse orthogonal transformation for decoding. COPYRIGHT: (C)1996,JPO
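A hedged sketch of the two decoding steps the abstract names, for an 8x8 block: run-length decode the (z, d) pairs into a zigzag-ordered vector, then reorder it into raster order. The zigzag scan below is the conventional JPEG-style order, assumed rather than taken from the patent, and the double-buffering with CMEM0/CMEM1 is not modelled.

    def zigzag_order(n=8):
        """(row, col) positions of an n x n block in the conventional zigzag order."""
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    def run_length_decode(pairs, n=8):
        """pairs: sequence of (z, d), z preceding zeros and d a nonzero value.
        Returns the n x n block converted from zigzag to raster (row-major) order."""
        zigzag = [0] * (n * n)
        pos = 0
        for z, d in pairs:
            pos += z                    # skip the run of zeros
            zigzag[pos] = d
            pos += 1
        block = [[0] * n for _ in range(n)]
        for value, (r, c) in zip(zigzag, zigzag_order(n)):
            block[r][c] = value
        return block

    if __name__ == "__main__":
        # A block whose only nonzero coefficients are the DC term and two early
        # AC terms along the zigzag path.
        for row in run_length_decode([(0, 50), (1, -3), (2, 7)]):
            print(row)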

Proceedings ArticleDOI
30 Mar 1993
TL;DR: An artificial neural network is used to develop entropy-biased codebooks which yield substantial data compression without entropy coding and are very robust with respect to transmission channel errors.
Abstract: The authors demonstrate the use of a differential vector quantization (DVQ) architecture for the coding of digital images. An artificial neural network is used to develop entropy-biased codebooks which yield substantial data compression without entropy coding and are very robust with respect to transmission channel errors. Two methods are presented for variable bit-rate coding using the described DVQ algorithm. In the first method, both the encoder and the decoder have multiple codebooks of different sizes. In the second, variable bit-rates are achieved by using subsets of one fixed codebook. The performance of these approaches is compared, under conditions of error-free and error-prone channels. Results show that this coding technique yields pictures of excellent visual quality at moderate compression rate.

Patent
04 Oct 1993
TL;DR: In this paper, a picture encoding system capable of reversibly restoring original picture data with high compression efficiency is proposed, where picture data consisting of plural picture element data are inputted through a picture data input part 11 and an estimation part 13 estimates the density value of the noted picture element.
Abstract: PURPOSE: To provide a picture encoding system capable of reversibly restoring original picture data with high compression efficiency. CONSTITUTION: Picture data consisting of plural picture element data are inputted through a picture data input part 11. An estimation part 13 estimates the density value of the noted picture element. A difference calculation part 14 obtains the difference value between the density value of the original picture element and the estimated value and classifies the difference value and the estimate value. A code table preparing part 12 prepares a code table so as to properly encode each estimate value class. A code table selecting part 17 detects the estimate value class of the noted picture element and selects the code table corresponding to the estimate value class. An entropy encoding part 18 performs the optimal encoding with the use of the code table.
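A hedged sketch of the flow described above, with an invented predictor, class boundaries, and code tables (the patent does not specify these values here): estimate the pixel from its causal neighbours, classify the estimate, select the class's code table, and look up the difference.

    def estimate(left, above):
        """Hypothetical predictor: average of the left and upper neighbours."""
        return (left + above) // 2

    def classify(estimated_value):
        """Hypothetical estimate-value classes (e.g. dark / mid / bright regions)."""
        return 0 if estimated_value < 85 else 1 if estimated_value < 170 else 2

    def encode_pixel(pixel, left, above, code_tables):
        est = estimate(left, above)
        diff = pixel - est
        table = code_tables[classify(est)]        # code table selected by the class
        return table.get(diff, "ESC")             # escape code for rare differences

    if __name__ == "__main__":
        # Toy per-class prefix codes for small differences (invented for illustration).
        tables = [{0: "0", 1: "10", -1: "11"},
                  {0: "0", 1: "10", -1: "110", 2: "111"},
                  {0: "00", 1: "01", -1: "10", 2: "110", -2: "111"}]
        print(encode_pixel(101, left=100, above=104, code_tables=tables))   # '110'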