
Image file formats

About: Image file formats is a research topic. Over the lifetime, 10,349 publications have been published within this topic, receiving 102,407 citations.


Papers
Journal ArticleDOI
TL;DR: The open-source framework MARVIN provides a flexible input/output system with support for many file formats, including DICOM, various 2D image formats, and surface mesh data; it also implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware.

21 citations

Patent
08 Jan 1998
TL;DR: In this patent, an image editing apparatus includes a recording medium for recording an image file and a scenario file, where the scenario file records a replay order or a replay condition for the image file in a predetermined file format.
Abstract: An image editing apparatus includes a recording medium for recording an image file and a scenario file, wherein the scenario file is formed by recording a replay order or a replay condition of the image file with a predetermined file format, a scenario evaluating circuit for reading the scenario file from the recording medium and evaluating the replay order or the replay condition, an editor for editing the image file in response to an evaluation by the scenario evaluating circuit, and a recorder for recording the image file on the recording medium.
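The patent does not specify the "predetermined file format" of the scenario file. Purely as an illustration, a scenario file that records a replay order and a replay condition for each image file might look like the following hypothetical JSON-based sketch (all field names are invented, not taken from the patent):

    import json

    # Hypothetical scenario file content: one entry per image file,
    # recording its replay order and a replay condition.
    scenario = {"entries": [
        {"image_file": "IMG_0001.JPG", "order": 1, "condition": "always"},
        {"image_file": "IMG_0002.JPG", "order": 2, "condition": "hold_3s"},
    ]}

    def evaluate_scenario(path):
        # Rough analogue of the patent's "scenario evaluating circuit":
        # read the scenario file and return image files in replay order.
        with open(path) as f:
            data = json.load(f)
        entries = sorted(data["entries"], key=lambda e: e["order"])
        return [e["image_file"] for e in entries]

    with open("scenario.json", "w") as f:
        json.dump(scenario, f)
    print(evaluate_scenario("scenario.json"))  # ['IMG_0001.JPG', 'IMG_0002.JPG']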

21 citations

Patent
22 Jan 1998
TL;DR: In this patent, an electronic camera includes a sensor for capturing an image, an electronic image display for displaying the captured image, and a user interface for selectively enabling a quick-view feature in which the image display is automatically turned on for a set period of time after an image is captured, and then automatically turned off, so that the captured image can be reviewed quickly.
Abstract: An electronic camera includes a sensor for capturing an image, an electronic image display for displaying the captured image, and a user interface for selectively enabling a quick-view feature in which the image display is automatically turned on for a set period of time after an image is captured, and then automatically turned off, in order to quickly review the captured image. The camera further includes a processor for performing processing on the captured image and generating a processed image file therefrom, and a memory for storing the processed image file. When an erase command is provided to the processor, the processing of the image file is terminated and the partially completed image file is deleted from the memory.
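As a rough model of the erase behaviour described (class and method names are invented for illustration; the patent describes hardware, not software):

    class CameraProcessor:
        # Illustrative model: processing a captured image into an image file
        # can be interrupted by an erase command, in which case the partially
        # completed file is discarded instead of being stored.
        def __init__(self):
            self.erase_requested = False

        def erase(self):
            self.erase_requested = True    # e.g. the user presses "erase"

        def process(self, sensor_chunks):
            partial = []
            for chunk in sensor_chunks:    # process the capture incrementally
                if self.erase_requested:   # erase command: stop processing
                    partial.clear()        # delete the partially completed file
                    return None
                partial.append(chunk)
            return b"".join(partial)       # completed image file, ready to store

    cam = CameraProcessor()
    print(cam.process([b"\x01", b"\x02"]))  # b'\x01\x02'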

21 citations

Patent
13 May 2004
TL;DR: In this patent, the inventors propose a method and a system for image data transfer that enable a client (image data receiving device) to receive an image file of a size matched to the speed of the network from a server (image data transmitting device) without human intervention.
Abstract: PROBLEM TO BE SOLVED: To provide a method and a system for image data transfer that enable a client (image data receiving device) to receive an image file of a size matched to the speed of the network from a server (image data transmitting device) without human intervention.

SOLUTION: A plurality of files containing the same image data at different resolutions, together with a file information file in which the sizes of those files are recorded, are stored on the image data transmitting device, and the file information file is transmitted to the image data receiving device. The image data receiving device then requests the lowest-resolution file recorded on the image data transmitting device and receives it, measures the transfer time of that file, and calculates the data transfer speed between the image data transmitting device and the client from the measured time. The receiving device then estimates, from the calculated transfer speed, the transfer time of a higher-resolution file among the plurality of files, and receives the higher-resolution file when the estimated time is within the longest permissible time previously set in the image data receiving device.
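The client-side selection logic described above is straightforward to sketch. The following is a minimal illustration, not the patent's actual implementation: fetch_file, the (filename, size) list, and the use of wall-clock timing are assumptions made for the example.

    import time

    def adaptive_fetch(fetch_file, file_info, max_seconds):
        # file_info: (filename, size_in_bytes) pairs ordered from lowest to
        # highest resolution, as read from the server's file information file.
        # 1. Receive the lowest-resolution file and time the transfer.
        name, size = file_info[0]
        start = time.monotonic()
        image = fetch_file(name)
        elapsed = max(time.monotonic() - start, 1e-9)
        speed = size / elapsed                    # measured bytes per second
        # 2. Choose the highest resolution whose estimated transfer time
        #    stays within the preset longest permissible time.
        for name, size in reversed(file_info[1:]):
            if size / speed <= max_seconds:
                return fetch_file(name)
        return image                              # fall back to lowest resolution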

21 citations

Journal ArticleDOI
TL;DR: JPEG 2000 may be considered a format that guarantees efficient robustness to bit errors and offers acceptable quality in the presence of transmission or physical errors; this view is confirmed by the case-study results reported in this article, concerning image quality after the occurrence of random errors.
Abstract: Digital preservation requires a strategy for the storage of large quantities of data, which increase dramatically when dealing with high-resolution images. Typically, decision-makers must choose whether to keep terabytes of images in their original TIFF format or to compress them. This can be a very difficult decision: to lose visual information through compression could waste the money expended in the creation of the digital assets; however, choosing to compress reduces the costs of storage. The wavelet compression of JPEG 2000 produces a high-quality image: it is an acceptable alternative to TIFF and a good strategy for the storage of large image assets. Moreover, JPEG 2000 may be considered a format that guarantees efficient robustness to bit errors and offers acceptable quality in the presence of transmission or physical errors: this view is confirmed by the case-study results reported in this article, which compare image quality across different file formats after the occurrence of random errors. Simple tools and freeware can be used to improve format robustness by duplicating file headers inside or outside the image file format, enhancing the role of JPEG 2000 as a new archival format for high-quality images.

Introduction: current trends

In recent years the JPEG 2000 format has been widely used in digital libraries, not only as a "better" JPEG to deliver medium-quality images, but also as a new "master" file for high-quality images, replacing TIFF [1] images. One of the arguments for this policy was the "lossless mode" feature of JPEG 2000; but this type of compression saves only about half of the storage requirements of TIFF, so it is unlikely that this was the only reason digital libraries moved in this direction. The only reasonable choice was the standard lossy compression, which offers a 1:20 (color) or 1:10 (grayscale) ratio. This provides significant savings in storage, considering that the quality of images in digitization projects has increased dramatically in the past few years: the highest standards for image capture are now very common in digital libraries. Thus, the argument shifted from the "mathematically lossless" concept to a softer "visually lossless" definition, and the question became: what do we lose in choosing the JPEG 2000 "lossy" mode? Let's focus on the following definitions:

"The image file will not retain the actual RGB color data, but it will look the same because screens and our eyes are so forgiving" [2]

"... many repositories are storing 'visually lossless' JPEG 2000 files: the compression is lossy and irreversible but the artifacts are not noticeable" [3]

As mentioned above, some institutions began to store JPEG 2000 files in their digital repositories as the "archival format" [4]. This policy was sometimes officially declared, and in other cases adopted de facto: "The migration process involves creating a derivative master from the original archival master...", or, as in the following migration rationale: "Create JPEG 2000 datastream for presentation and standardize on JPEG 2000 as an archival master format." [5]

One of the most relevant and specific examples of format migration to JPEG 2000 was made at the Harvard University Library (HUL) [6]: "HUL chose to perform a migration of various image files to the JPEG 2000 format.
There is great local interest at Harvard in the retrospective conversion of substantial numbers of existing TIFF images to enhance their utility by permitting the dynamic image manipulation facilitated by the JPEG 2000 format. The three goals that guided the design of the migration were: to preserve fully the integrity of the GIF, JPEG, and TIFF source data when transformed into the JPEG 2000 (JP2) format; to maximize the utility of the new JP2 objects; and to minimize migration costs."

The Xerox Research Center, namely Robert Buckley, was involved in this strategy, producing studies on the integration of JPEG 2000 into the OAIS Reference Model and defining it as a digital preservation standard [7]. Although Buckley's Technology Watch Report has been accepted and promoted by the Digital Preservation Coalition in the UK, many relevant experts in this field still seem to show some skepticism and continue to take a "wait and see" position [8]: "... some institutions engaged in large-scale efforts are considering a switch to JPEG 2000 ... However, the standard is not yet commonly used and there is not sufficient support for it by Web browsers. The number of tools available for JPEG 2000 is limited but continues to grow." [9]

Tim Vitale's opinion of JPEG 2000 was very clear in his 2007 report [10]: "It is not an archival format ... Existing web browsers (mid-2007) are not yet JPEG 2000 capable. One of the biggest problems with the format is the need for viewing software to be added to existing web browsers ... There are very few implementations of the JPEG 2000 technology; more work needs to be done before general understanding and acceptance will be possible."

However, this is no longer the case: most common commercial digital-imaging programs now support JPEG 2000, not to mention the JPEG 2000 support in some excellent shareware [11]. The real problem is that the JPEG 2000 format allows the storage of very large images, and no current program manages computer memory intelligently: this is the commercial rationale for professional image servers and encoders, which are relatively costly [12], for specific viewers for geographic images (generally free [13]), and for browser plug-ins (free as well).

1. Image compression of continuous tone images

The primary objection to JPEG 2000 compression remains the possible loss of visual information. Our argument against this objection will not focus on how the wavelet approach works [14], but on why it works, using some very basic elements of compression theory [15]. In other words, preserving visual information deals mainly with how images are perceived visually, and only secondarily with the mathematical aspects of the physical signal (materials, procedures, techniques). Some would argue that images look the same as before compression simply because humans don't see very well, and that a deeper examination (or a better monitor) would reveal errors and losses. This is not true: even when JPEG 2000 images are enhanced by magnification, no human can perceive any errors or losses. A digital surrogate is not necessarily a bad copy of the original, and compression does not always mean loss of information.
Some people may also think that compression is equivalent to the "sampling" of a signal: for example, if we choose 300 points per inch to represent an object, sub-sampling might take only 150 or 100 points instead, at the risk of losing information essential for reconstructing that signal. Any sampling below the Nyquist rate produces aliasing effects: if we represent the signal as a wave, the sampling interval should match the shape of the wave exactly; otherwise, original images are "misunderstood" and appear as artifacts. But compression is not a kind of sub-sampling performed after the capture of an image.

We can either eliminate redundant information (a sequence of identical values), which gives a kind of lossless compression, or, beneath the physical-mathematical reality, we can operate on the human perception of it. Since we are dealing with information that we perceive with our eyes, we can compress irrelevant information, i.e., what is less relevant to our senses. The human eye is less sensitive to color than to light, so the chrominance signal can be compressed more than the luminance signal without any perceptible loss.

This is very important for digital images of historical documents, as they are usually either color or grayscale images, i.e., "continuous tone" images. As opposed to a "discrete tone" image (such as a printed or typed document in black and white), in a continuous-tone image any variation between adjacent pixels is relevant: in other words, the pixels are "correlated" with each other. We cannot retrieve a sequence of identical values to compress, so we need a more sophisticated strategy. We can select a part of the image, an array of pixels, and calculate the average of its values; then we can calculate the difference of each single value from that average. This is called "de-correlation" of the image pixels, and at the end of this process we will find that many of the differences from the basic average value are zero, or almost zero, so we can easily compress the image by assigning them the same value (quantization) [16].

When we separate the three color channels, each of them can be considered a grayscale image, and we can use the "bit planes" technique [17]. For example, let us take three adjacent pixels in a grayscale image, with very different values, in decimal and in binary code:

[Figure 1: An 8-bit grayscale image and its bit planes.]

10 = 000001010
 3 = 000000011
-7 = 100000111

The image is at 8-bit depth, so we have 1+8 bits (the first bit represents the +/- sign). At positions 2, 3, 4 and 5 (i.e., at bit planes 2, 3, 4 and 5) we find only "0", and at position 8 we find only "1": this is also expressed by saying that the relevant information, or energy (the low frequencies), concentrates at certain levels, while the other levels (the high frequencies) can be easily compressed [18]. This is very clear in a representation of an image in 8-bit planes: continuous-tone variations between adjacent pixels are turned into eight separate contexts, in each of which it is now possible to compress adjacent values.

[Figure 2: A corrupted JPEG file.]

There are two main methods for de-correlating pixels ...
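The de-correlation and bit-plane steps described above are easy to reproduce. The snippet below is an illustrative sketch only (NumPy assumed; the block size and the quantization threshold are arbitrary choices, not values from the article): it de-correlates a block of pixels by subtracting the block average and quantizing near-zero differences, then slices an 8-bit image into its bit planes.

    import numpy as np

    def decorrelate_block(block, threshold=2):
        # Difference of every pixel from the block average; near-zero
        # differences are quantized to exactly 0, making them compressible.
        diffs = block.astype(np.int16) - int(round(float(block.mean())))
        diffs[np.abs(diffs) < threshold] = 0
        return diffs

    def bit_planes(gray):
        # Split an 8-bit grayscale image into 8 binary planes; a few planes
        # carry most of the energy, while the rest compress easily.
        return [(gray >> k) & 1 for k in range(8)]

    block = np.random.randint(100, 110, size=(8, 8), dtype=np.uint8)  # smooth region
    print(decorrelate_block(block))  # mostly zeros after quantization
    print(bit_planes(block)[7])      # top bit plane: all zeros for values < 128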

21 citations


Network Information
Related Topics (5)
Image processing
229.9K papers, 3.5M citations
81% related
Software
130.5K papers, 2M citations
80% related
Feature extraction
111.8K papers, 2.1M citations
78% related
Deep learning
79.8K papers, 2.1M citations
78% related
The Internet
213.2K papers, 3.8M citations
77% related
Performance
Metrics
No. of papers in the topic in previous years
Year   Papers
2023        8
2022       22
2021      124
2020      271
2019      375
2018      384