Showing papers on "Standard test image" published in 1986


Book ChapterDOI
01 Jan 1986
TL;DR: This book demonstrates how digital image-processing techniques can produce information about the optical image that cannot be obtained in any other way.
Abstract: New discoveries in the life sciences are often linked to the development of unique optical tools that allow experimental material to be examined in new ways. As microscopists, we are constantly searching for new techniques for extracting even more optical information from the material we work with, as the subject of this book aptly demonstrates. It is not surprising, then, that microscopists have begun to turn to computer technology to squeeze more information from their experimental images. Computer processing can be used to obtain numerical information from the microscope image that is more accurate, less time-consuming, and more reproducible than the same operations performed by other methods. It can also enhance the appearance of the microscope image, for example by increasing contrast or reducing noise, in ways that are difficult to duplicate using photographic or video techniques alone. Used to their fullest, digital image-processing techniques can produce information about the optical image that cannot be obtained in any other way.

258 citations
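
The enhancement operations the abstract mentions, contrast adjustment and noise reduction, are standard point and neighbourhood operations. The sketch below is a minimal NumPy illustration of that kind of processing, not a method taken from the book: the percentile limits, the 3x3 median window, and the synthetic `frame` are all illustrative assumptions.

```python
import numpy as np

def stretch_contrast(img, low_pct=1, high_pct=99):
    """Linear contrast stretch: map a percentile range onto [0, 1]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def median_filter_3x3(img):
    """3x3 median filter, a simple noise-reduction step (border pixels kept)."""
    out = img.copy()
    h, w = img.shape
    stack = np.stack([img[i:h - 2 + i, j:w - 2 + j]
                      for i in range(3) for j in range(3)])
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

# Hypothetical noisy, low-contrast "micrograph" standing in for real data.
rng = np.random.default_rng(0)
frame = 0.4 + 0.1 * rng.random((128, 128))
enhanced = stretch_contrast(median_filter_3x3(frame))
```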


Journal ArticleDOI
TL;DR: A new gray-scale image coding technique combines an extended DPCM approach with entropy coding; it has been implemented in a freeze-frame videoconferencing system now operational at IBM sites throughout the world.
Abstract: A new gray-scale image coding technique has been developed, in which an extended DPCM approach is combined with entropy coding. This technique has been implemented in a freeze-frame videoconferencing system which is now operational at IBM sites throughout the world. Following image preprocessing, the two fields of the interlaced 512 x 480 pixel video frame are compressed sequentially with different algorithms. The reconstructed image quality is improved by subsequent image postprocessing, the final reconstructed image being almost indistinguishable from the original. Typical gray-scale video images compress to about half a bit per pixel and transmit over 4.8 kbit/s dial-up telephone lines in about half a minute. The gray-scale image processing and compression algorithms are described in this paper.

25 citations
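
The abstract does not spell out the extended-DPCM or entropy-coding algorithms, so the sketch below shows only the generic idea behind such a coder: predict each pixel from its left neighbour, keep the prediction error, and let an entropy coder exploit the error's sharply peaked distribution. The previous-pixel predictor, the synthetic scanline, and the entropy estimate used in place of a real coder are all assumptions, not the paper's method.

```python
import numpy as np

def dpcm_residuals(row):
    """Previous-pixel DPCM: send the first sample verbatim, then only
    the prediction error of each sample against its left neighbour."""
    res = np.empty(row.shape, dtype=np.int16)
    res[0] = row[0]
    res[1:] = row[1:].astype(np.int16) - row[:-1].astype(np.int16)
    return res

def empirical_entropy(values):
    """Shannon entropy (bits/sample) of the residuals: a lower bound
    on the rate an ideal entropy coder would achieve on them."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical smooth 8-bit scanline standing in for a real video field.
row = (128 + 40 * np.sin(np.linspace(0, 6, 512))).astype(np.uint8)
print(f"residual entropy: {empirical_entropy(dpcm_residuals(row)):.2f} bits/pixel")
```

On smooth data the residuals cluster near zero, which is how DPCM followed by entropy coding reaches rates well below the 8 bits/pixel of the raw samples.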



DOI
01 Aug 1986
TL;DR: Results suggest that adaptively updating the codebook could improve the performance of vector quantisation for interframe image coding.
Abstract: This paper presents results on the use of vector quantisation for interframe image coding. The initial stage in vector quantisation is the formation/extraction of vectors from the data source. Two vector formation schemes, generalised from intraframe image coding, are proposed and evaluated on a test image sequence. The first, direct three-dimensional block vector quantisation, applies vector quantisation to small three-dimensional blocks of pixels extracted from sequences of images. In the second, the three-dimensional Hadamard transform is first calculated on a block basis and is followed by vector quantisation of a subset of the Hadamard coefficients. Significantly improved results are obtained in the second case, as the components of the vectors are decorrelated in both the spatial and temporal directions. Three training sets are used for the simulations, and results suggest that adaptively updating the codebook could result in improved performance.

5 citations
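
Of the two vector-formation schemes, the first (direct three-dimensional block vector quantisation) is the easier to sketch. Below, non-overlapping 2x2x2 blocks are pulled from a frame sequence and quantised against a trained codebook; the block size, the codebook size, the synthetic `frames`, and the use of `scipy.cluster.vq.kmeans2` in place of the paper's actual codebook training are all assumptions, and the better-performing Hadamard-transform variant is not shown.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def extract_3d_blocks(frames, b=2):
    """Slice a (T, H, W) sequence into non-overlapping b x b x b blocks,
    flattened into vectors: the 'direct 3-D block' vector formation."""
    T, H, W = frames.shape
    return (frames[:T // b * b, :H // b * b, :W // b * b]
            .reshape(T // b, b, H // b, b, W // b, b)
            .transpose(0, 2, 4, 1, 3, 5)
            .reshape(-1, b * b * b))

# Hypothetical training sequence; the paper trained on real test sequences.
rng = np.random.default_rng(1)
frames = rng.random((8, 64, 64)).astype(np.float32)
vectors = extract_3d_blocks(frames)

codebook, _ = kmeans2(vectors, k=64, minit='++', seed=1)  # train the codebook
indices, _ = vq(vectors, codebook)                        # encode: nearest codeword
reconstructed = codebook[indices]                         # decode from indices
```

One simple reading of the adaptive-codebook suggestion is to periodically retrain `codebook` on recently encoded vectors, though the paper's exact update scheme is not given in the abstract.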