
Showing papers on "High dynamic range published in 2003"


Proceedings ArticleDOI
01 Jul 2003
TL;DR: This paper describes the approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame, and how to compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.
Abstract: Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. We show examples for a variety of dynamic scenes. We also show how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.
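
The core of HDR stitching, once neighboring frames are registered, is converting each exposure to relative radiance through the camera response and exposure time, then blending per pixel. A minimal sketch of such a weighted radiance merge follows; `frames`, `exposure_times`, and `inv_response` are illustrative names, and the paper's motion compensation and tonemapping are not shown.

```python
import numpy as np

def merge_radiance(frames, exposure_times, inv_response):
    """Merge registered, differently exposed frames into a radiance map.

    frames:         list of HxW arrays with pixel values in [0, 1]
    exposure_times: exposure time of each frame
    inv_response:   maps pixel value -> relative sensor irradiance (vectorized)
    A generic weighted average; the paper's HDR stitching additionally
    handles scene and camera motion, which is omitted here.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for z, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * z - 1.0)      # hat weight: trust mid-range pixels
        num += w * inv_response(z) / t       # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```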

641 citations


Journal ArticleDOI
TL;DR: In this article, an arbitrated address-event imager was designed and fabricated in a 0.6-μm CMOS process; it is composed of 80 × 60 pixels of 32 × 30 μm. Tests conducted on the imager showed a large output dynamic range of 180 dB (under bright local illumination) for an individual pixel.
Abstract: An arbitrated address-event imager has been designed and fabricated in a 0.6-μm CMOS process. The imager is composed of 80 × 60 pixels of 32 × 30 μm. The value of the light intensity collected by each photosensitive element is inversely proportional to the pixel's interspike time interval. The readout of each spike is initiated by the individual pixel; therefore, the available output bandwidth is allocated according to pixel output demand. This encoding of light intensities favors brighter pixels, equalizes the number of integrated photons across light intensity, and minimizes power consumption. Tests conducted on the imager showed a large output dynamic range of 180 dB (under bright local illumination) for an individual pixel. The array, on the other hand, produced a dynamic range of 120 dB (under uniform bright illumination and when no lower bound was placed on the update rate per pixel). The dynamic range is 48.9 dB at 30-pixel updates/s. Power consumption is 3.4 mW in uniform indoor light with a mean event rate of 200 kHz, which updates each pixel 41.6 times per second. The imager is capable of updating each pixel 8.3K times per second (under bright local illumination).

362 citations


Journal ArticleDOI
TL;DR: A new method is proposed for determining the camera's response function: an iterative procedure that need be done only once for a particular camera. With the response known, high dynamic range images are constructed as a weighted average of the input images, with higher weight assigned to pixels taken at longer exposure times.
Abstract: We present a new approach for improving the effective dynamic range of cameras by using multiple photographs of the same scene taken with different exposure times. Using this method enables the photographer to accurately capture scenes that contain high dynamic range by using a device with low dynamic range, which allows the capture of scenes that have both very bright and very dark regions. We approach the problem from a probabilistic standpoint, distinguishing it from the other methods reported in the literature on photographic dynamic range improvement. A new method is proposed for determining the camera's response function, which is an iterative procedure that need be done only once for a particular camera. With the response function known, high dynamic range images can be easily constructed by a weighted average of the input images. The particular form of weighting is controlled by the probabilistic formulation of the problem, and results in higher weight being assigned to pixels taken at longer exposure times. The advantages of this new weighting scheme are explained by comparison with other methods in the literature. Experimental results are presented to demonstrate the utility of the algorithm. © 2003 SPIE
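
The abstract describes an iterative, one-time calibration of the camera response followed by a weighted average. The sketch below shows one possible alternating scheme in that spirit (closer to Robertson-style calibration than to the paper's exact probabilistic formulation); the clipped-code weighting and scale fixing are assumptions.

```python
import numpy as np

def estimate_response(images, times, levels=256, iters=20):
    """Alternately estimate per-pixel irradiance and a discrete response curve.

    images: list of HxW uint8 exposures of a static scene
    times:  exposure time of each image
    Returns g, mapping pixel value -> relative irradiance.
    A sketch of one alternating scheme, not the paper's exact procedure.
    """
    g = np.linspace(0.0, 1.0, levels)            # initial (linear) response
    w = np.ones(levels)
    w[0] = w[-1] = 0.0                           # ignore clipped codes
    for _ in range(iters):
        # irradiance from the current response, weighted toward longer exposures
        num = sum(w[im] * g[im] * t for im, t in zip(images, times))
        den = sum(w[im] * t * t for im, t in zip(images, times))
        E = num / np.maximum(den, 1e-12)
        # response value for each code = mean of t*E over pixels with that code
        for z in range(levels):
            vals = np.concatenate([(t * E)[im == z] for im, t in zip(images, times)])
            if vals.size:
                g[z] = vals.mean()
        g /= g[levels // 2]                      # fix the scale ambiguity
    return g
```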

353 citations


Journal ArticleDOI
Greg Ward1
TL;DR: A three million pixel exposure can be aligned in a fraction of a second on a contemporary microprocessor using this technique, and the cost of the algorithm is linear with respect to the number of pixels and effectively independent of the maximum translation.
Abstract: In this paper, we present a fast, robust, and completely automatic method for translational alignment of hand-held photographs. The technique employs percentile threshold bitmaps to accelerate image operations and avoid problems with the varying exposure levels used in high dynamic range (HDR) photography. An image pyramid is constructed from grayscale versions of each exposure, and these are converted to bitmaps which are then aligned horizontally and vertically using inexpensive shift and difference operations over each image. The cost of the algorithm is linear with respect to the number of pixels and effectively independent of the maximum translation. A three million pixel exposure can be aligned in a fraction of a second on a contemporary microprocessor using this technique.
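
The alignment reduces to thresholding each grayscale exposure at its median, building a pyramid, and testing one-pixel shifts with cheap XOR-and-count operations at each level. A minimal sketch under those assumptions (the paper's exclusion bitmap around the median is omitted):

```python
import numpy as np

def mtb(gray):
    """Percentile-threshold bitmap: 1 where the pixel is above the median."""
    return gray > np.median(gray)

def align_shift(a, b, levels=5):
    """Estimate the (dy, dx) translation aligning exposure b to exposure a.

    Grayscale exposures are compared through median-threshold bitmaps on an
    image pyramid, testing the nine one-pixel shifts at each level. A sketch
    of the approach; the exclusion bitmap near the median is omitted.
    """
    if levels == 0:
        base = (0, 0)
    else:
        # recurse on half-resolution images, then refine the doubled offset
        dy, dx = align_shift(a[::2, ::2], b[::2, ::2], levels - 1)
        base = (2 * dy, 2 * dx)
    ba, bb = mtb(a), mtb(b)
    best, best_err = base, None
    for oy in (-1, 0, 1):
        for ox in (-1, 0, 1):
            dy, dx = base[0] + oy, base[1] + ox
            shifted = np.roll(np.roll(bb, dy, axis=0), dx, axis=1)
            err = np.count_nonzero(ba ^ shifted)   # XOR and count differences
            if best_err is None or err < best_err:
                best, best_err = (dy, dx), err
    return best
```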

306 citations


Patent
26 Sep 2003
TL;DR: In this paper, a high dynamic range and high precision broadband optical inspection system and method are provided for high throughput substrate inspection in which the sides, bevels and edges of the substrate may be rapidly or simultaneously inspected for defects.
Abstract: A high dynamic range and high precision broadband optical inspection system (1) and method are provided. The system provides the capability of optical inspection of patterned and unpatterned substrates (27), in which a very large dynamic range with very high precision is desirable to detect light-scattering defects from sub-micron to hundreds of microns in size. The system permits high throughput substrate inspection in which the sides, bevels and edges of the substrate may be rapidly or simultaneously inspected for defects.

276 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe an approach to generate high dynamic range (HDR) video from an image sequence captured using an off-the-shelf camcorder.
Abstract: Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame.

229 citations


Journal ArticleDOI
01 May 2003
TL;DR: This paper provides a description of the technology as well as findings from a supporting psychological study that establishes that correction for the low resolution display through compensation in the high resolution display yields an image which does not differ perceptibly from that of a purely high resolution HDR display.
Abstract: We have developed an emissive high dynamic range (HDR) display that is capable of displaying a luminance range of 10,000 cd/m² to 0.1 cd/m² while maintaining all features found in conventional LCD displays such as resolution, refresh rate and image quality. We achieve that dynamic range by combining two display systems — a high resolution transmissive LCD and a low resolution, monochrome display composed of high brightness light emitting diodes (LED). This paper provides a description of the technology as well as findings from a supporting psychological study that establishes that correction for the low resolution display through compensation in the high resolution display yields an image which does not differ perceptibly from that of a purely high resolution HDR display.
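
Conceptually, the display factors a target luminance image into a coarse LED layer and a high-resolution LCD layer, with the LCD compensating for the backlight's low resolution. A toy factorization illustrating that split is below; the block-shaped backlight model, block size, and square-root drive are assumptions, not the display's actual optics or the paper's correction.

```python
import numpy as np

def split_hdr(target, block=32):
    """Factor a normalized HDR luminance image into LED and LCD drive layers.

    target: HxW array scaled to [0, 1], where 1.0 is the display's peak luminance.
    Returns (led, lcd); the displayed image is modeled as kron(led) * lcd.
    """
    h, w = (target.shape[0] // block) * block, (target.shape[1] // block) * block
    t = np.clip(target[:h, :w], 0.0, 1.0)       # crop so blocks tile evenly
    # LED layer: one value per block, the square root of the block's peak,
    # so backlight and LCD each carry roughly half of the dynamic range
    led = np.sqrt(t.reshape(h // block, block, w // block, block).max(axis=(1, 3)))
    # crude model of the coarse backlight field the LCD sits in front of
    backlight = np.kron(led, np.ones((block, block)))
    # LCD layer compensates for the coarse backlight to recover the target
    lcd = np.clip(t / np.maximum(backlight, 1e-6), 0.0, 1.0)
    return led, lcd
```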

204 citations


Proceedings ArticleDOI
Nayar1, Branzoi1
01 Jan 2003
TL;DR: In this paper, a real-time control algorithm is developed that uses acquired images to automatically adjust the transmittance function of the spatial modulator, which is used to compute a very high dynamic range image that is linear in scene radiance.
Abstract: This paper presents a new approach to imaging that significantly enhances the dynamic range of a camera. The key idea is to adapt the exposure of each pixel on the image detector, based on the radiance value of the corresponding scene point. This adaptation is done in the optical domain, that is, during image formation. In practice, this is achieved using a spatial light modulator whose transmittance can be varied with high resolution over space and time. A real-time control algorithm is developed that uses acquired images to automatically adjust the transmittance function of the spatial modulator. Each captured image and its corresponding transmittance function are used to compute a very high dynamic range image that is linear in scene radiance. We have implemented a video-rate adaptive dynamic range camera that consists of a color CCD detector and a controllable liquid crystal light modulator. Experiments have been conducted in scenarios with complex and harsh lighting conditions. The results indicate that adaptive imaging can have a significant impact on vision applications such as monitoring, tracking, recognition, and navigation.
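
Per pixel, the idea is that scene radiance can be recovered from the captured brightness divided by the known modulator transmittance, while the controller retargets the transmittance so the next capture is well exposed. A toy control step under those assumptions (thresholds and gains are illustrative, and the modulator's limited resolution and temporal response are ignored):

```python
import numpy as np

def adaptive_step(captured, transmittance, target=0.5, sat=0.98):
    """One control step for per-pixel adaptive exposure.

    captured:      HxW image from the detector, values in [0, 1]
    transmittance: HxW transmittance currently set on the modulator, in (0, 1]
    Returns (radiance, new_transmittance).
    """
    # scene radiance (up to scale) from the image and the known attenuation
    radiance = captured / np.maximum(transmittance, 1e-4)
    # drive each pixel toward a mid-range brightness next frame;
    # saturated pixels get a strong cut since their radiance is unknown
    new_t = np.where(captured >= sat,
                     transmittance * 0.1,
                     transmittance * target / np.maximum(captured, 1e-4))
    return radiance, np.clip(new_t, 1e-3, 1.0)
```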

188 citations


Proceedings Article
13 Oct 2003
TL;DR: A video-rate adaptive dynamic range camera that consists of a color CCD detector and a controllable liquid crystal light modulator is implemented and indicates that adaptive imaging can have a significant impact on vision applications such as monitoring, tracking, recognition, and navigation.
Abstract: This paper presents a new approach to imaging that significantly enhances the dynamic range of a camera. The key idea is to adapt the exposure of each pixel on the image detector, based on the radiance value of the corresponding scene point. This adaptation is done in the optical domain, that is, during image formation. In practice, this is achieved using a spatial light modulator whose transmittance can be varied with high resolution over space and time. A real-time control algorithm is developed that uses acquired images to automatically adjust the transmittance function of the spatial modulator. Each captured image and its corresponding transmittance function are used to compute a very high dynamic range image that is linear in scene radiance. We have implemented a video-rate adaptive dynamic range camera that consists of a color CCD detector and a controllable liquid crystal light modulator. Experiments have been conducted in scenarios with complex and harsh lighting conditions. The results indicate that adaptive imaging can have a significant impact on vision applications such as monitoring, tracking, recognition, and navigation.

183 citations


01 Jan 2003
TL;DR: The paper proves that simple summation combines all the information in the individual exposures without loss, and shows how to construct a table of optimal exposure values that can be incorporated into a digital camera so that a photographer can emulate a wide variety of high dynamic range cameras by selecting from a menu.
Abstract: Many computer vision algorithms rely on precise estimates of scene radiances obtained from an image. A simple way to acquire a larger dynamic range of scene radiances is by combining several exposures of the scene. The number of exposures and their values have a dramatic impact on the quality of the combined image. At this point, there exists no principled method to determine these values. Given a camera with known response function and dynamic range, we wish to find the exposures that would result in a set of images that when combined would emulate an effective camera with a desired dynamic range and a desired response function. We first prove that simple summation combines all the information in the individual exposures without loss. We select the exposures by minimizing an objective function that is based on the derivative of the response function. Using our algorithm, we demonstrate the emulation of cameras with a variety of response functions, ranging from linear to logarithmic. We verify our method on several real scenes. Our method makes it possible to construct a table of optimal exposure values. This table can be easily incorporated into a digital camera so that a photographer can emulate a wide variety of high dynamic range cameras by selecting from a menu.

1 Capturing a Flexible Dynamic Range

Many computer vision algorithms require accurate estimates of scene radiance such as color constancy [9], inverse rendering [13, 1] and shape recovery [17, 8, 18]. It is difficult to capture both the wide range of radiance values real scenes produce and the subtle variations within them using a low cost digital camera. This is because any camera must assign a limited number of brightness values to the entire range of scene radiances. The response function of the camera determines the assignment of brightness to radiance. The response therefore determines both the camera's sensitivity to changes in scene radiance and its dynamic range.

Figure 1: Illustration showing the impact of the choice of exposure values on which scene radiances are captured. (a) Small and large exposures combine to capture a high dynamic range: when large and small exposures are combined, the resulting image has a high dynamic range, but does not capture some scene variations. (b) Similar exposures combine to capture subtle variations: when similar exposure values are combined, the result includes subtle variations, but within a limited dynamic range. In both cases, a set of exposures taken with a camera results in an "effective camera." Which exposures must we use to emulate a desired effective camera?

A simple method for extending the dynamic range of a camera is to combine multiple images of a scene taken with different exposures [6, 2, 3, 10, 11, 12, 15, 16]. For example, the left of Fig. 1(a) shows a small and a large exposure, each capturing a different range of scene radiances. The illustration on the right of Fig. 1(a) shows that the result of combining the exposures includes the entire dynamic range of the scene. Note that by using these exposure values we fail to capture subtle variations in the scene, such as the shading of the ball. Once these variations are lost they cannot be restored by methods that change the brightness of an image, such as the recent work on tone mapping [4, 5, 14]. In Fig. 1(b), two similar exposures combine to produce an image that captures subtle variations, but within a limited dynamic range. As a result, in both Fig. 1(a) and (b), the images on the right can be considered as the outputs of two different "effective cameras." The number and choice of exposures determines the dynamic range and the response of each effective camera. This relationship has been ignored in the past. In this paper we explore this relationship to address the general problem of determining which exposure values to use in order to emulate an effective camera with a desired response and a desired dynamic range. Solving this problem requires us to answer the following questions:

• How can we create a combined image that preserves the information from all the exposures? Previous work suggested heuristics for combining the exposures [3, 11, 12]. We prove that even without linearizing the camera, simple summation preserves all the information contained in the set of individual exposures.
• What are the best exposure values to achieve a desired effective response function for the combined image? It is customary to arbitrarily choose the number of exposures and the ratio (say, 2) between consecutive exposure values [3, 10, 11, 12]. For example, when this is done with a linear real camera, the resulting combined image is relatively insensitive to changes in large radiances. This can bias vision algorithms that use derivatives of radiance. Such biases are eliminated using our algorithm, which selects the exposure values to best achieve a desired response.
• How can we best achieve a desired dynamic range and effective response function from a limited number of images? It is common to combine images with consecutive exposure ratios of 2 (see [3, 11, 12]) to create a high dynamic range image. With that choice of exposure ratio, it is often necessary to use 5 or more exposures to capture the full dynamic range of a scene. This is impractical when the number of exposures that can be captured is limited by the time to acquire the images, changes in the scene, or resources needed to process the images. Our algorithm determines the exposure values needed to best emulate a desired camera with a fixed number of images.

Our method allows us to emulate cameras with a wide variety of response functions. For the class of linear real cameras, we present a table of optimal exposure values for emulating high dynamic range cameras with, for example, linear and logarithmic (constant contrast) responses. Such a table can be easily incorporated into a digital camera so that a photographer can select his/her desired dynamic range and camera response from a menu. In other words, a camera with fixed response and dynamic range can be turned into one that has a "flexible" dynamic range. We show several experimental results using images of real scenes that demonstrate the power of this notion of flexible dynamic range.

2 The Effective Camera

When we take multiple exposures of the same scene, each exposure adds new information about the radiance values in the scene. In this section, we create an effective camera by constructing a single image which retains all the information from the individual exposures. By information we mean image brightness values which represent measurements of scene radiance.

Scene radiance is proportional to image irradiance E [7]. In a digital camera, the camera response function f jumps from one image brightness value B to the next at a list of positive irradiance values (shown below the graph in Fig. 2) which we call the measured irradiance levels. An image brightness value indicates that the corresponding measured irradiance lies in the interval between two of these levels. Hence, without loss of generality, we define B as the index of the first of these two levels, E_B, so that f(E_B) = B. Hence, the response function is equivalent to the list of measured irradiance levels. Now, consider the measured irradiance levels using unit exposure e_1 = 1 with a real non-linear camera having 4 brightness levels. These levels are shown on the bar at the bottom of Fig. 3(a). The irradiance levels for a second exposure scale by 1/e_2, as shown in Fig. 3(b). We combine the measured irradiance levels from the first and the second exposures by taking the union of all the …

Footnote (*): This work was completed with support from a National Science Foundation ITR Award (IIS-00-85864) and a grant from the Human ID Program: Flexible Imaging Over a Wide Range of Distances, Award No. N000-14-00-1-0929.
Footnote 1: The value we call exposure accounts for all the attenuations of light by the optics. One can change the exposure by changing a filter on the lens, the aperture size, the integration time, or the gain.
Footnote 2: Note that the slope of the response function determines the density of the levels, as shown by the short line segment in Fig. 2.
Footnote 3: Note that the number of exposures and brightness levels are for illustration only. Our arguments hold in general.
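
Following the claim that simple summation of brightness values preserves the information in the individual exposures, the combined image and the resulting effective response can be sketched directly; the exposure-selection objective based on the response derivative is not shown.

```python
import numpy as np

def combined_image(images):
    """Combine exposures by simple summation of their brightness values.

    No linearization is applied before summing; the paper argues this sum
    preserves all information in the individual exposures.
    """
    return np.sum(np.stack([im.astype(np.int64) for im in images]), axis=0)

def effective_response(response, exposures, irradiance):
    """Effective response of the summed image for scene irradiance E:
    B_eff(E) = sum_k f(e_k * E), where f is the single-shot response
    (`response` maps irradiance -> brightness, `exposures` are the e_k).
    """
    return sum(response(e * irradiance) for e in exposures)
```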

125 citations


Proceedings ArticleDOI
25 Jun 2003
TL;DR: An algorithm for determining quadrature rules for computing the direct illumination of predominantly diffuse objects by high dynamic range images is presented; it precisely reproduces fine shadow detail, is much more efficient than Monte Carlo integration, and does not require any manual intervention.
Abstract: We present an algorithm for determining quadrature rules for computing the direct illumination of predominantly diffuse objects by high dynamic range images. The new method precisely reproduces fine shadow detail, is much more efficient than Monte Carlo integration, and does not require any manual intervention.

Patent
14 Nov 2003
TL;DR: In this paper, a background image constructed from HDR image information is displayed along with portions of the HDR image corresponding to one or more regions of interest, and an intermediate image or a derived image is then displayed.
Abstract: Techniques and tools for displaying/viewing HDR images are described. In one aspect, a background image constructed from HDR image information is displayed along with portions of the HDR image corresponding to one or more regions of interest. The portions have at least one display parameter (e.g., a tone mapping parameter) that differs from a corresponding display parameter for the background image. Regions of interest and display parameters can be determined by a user (e.g., via a GUI). In another aspect, an intermediate image is determined based on image data corresponding to one or more regions of interest of the HDR image. The intermediate image has a narrower dynamic range than the HDR image. The intermediate image or a derived image is then displayed. The techniques and tools can be used to compare, for example, different tone mappings, compression methods, or color spaces in the background and regions of interest.

Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm for synthesizing a high dynamic range, motion blur free, still image from multiple captures, which consists of two main procedures, photocurrent estimation and saturation and motion detection.
Abstract: Advances in CMOS image sensors enable high-speed image readout, which makes it possible to capture multiple images within a normal exposure time. Earlier work has demonstrated the use of this capability to enhance sensor dynamic range. This paper presents an algorithm for synthesizing a high dynamic range, motion blur free, still image from multiple captures. The algorithm consists of two main procedures, photocurrent estimation and saturation and motion detection. Estimation is used to reduce read noise, and, thus, to enhance dynamic range at the low illumination end. Saturation detection is used to enhance dynamic range at the high illumination end as previously proposed, while motion blur detection ensures that the estimation is not corrupted by motion. Motion blur detection also makes it possible to extend exposure time and to capture more images, which can be used to further enhance dynamic range at the low illumination end. Our algorithm operates completely locally; each pixel's final value is computed using only its captured values, and recursively, requiring the storage of only a constant number of values per pixel independent of the number of images captured. Simulation and experimental results demonstrate the enhanced signal-to-noise ratio (SNR), dynamic range, and the motion blur prevention achieved using the algorithm.
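
The recursion can be sketched as a per-pixel running mean of the photocurrent, frozen once a pixel saturates or deviates too far from its current estimate (a stand-in for motion detection). The estimator, noise model, and detection rules below are simplified assumptions, not the paper's exact algorithm.

```python
import numpy as np

def recursive_estimate(captures, dt, sat_level, motion_thresh):
    """Per-pixel recursive photocurrent estimation with saturation/motion checks.

    captures: non-destructive readouts of the same integration, taken dt apart.
    Only a constant amount of state is kept per pixel (estimate, count, stop
    mask), echoing the paper's locality claim.
    """
    est = np.zeros(captures[0].shape, dtype=np.float64)   # photocurrent estimate
    n = np.zeros_like(est)                                 # samples accepted so far
    stopped = np.zeros(est.shape, dtype=bool)              # saturated or moving pixels
    prev = np.zeros_like(est)
    for frame in captures:
        frame = frame.astype(np.float64)
        rate = (frame - prev) / dt                         # signal gained since last readout
        saturated = frame >= sat_level
        moving = (n > 0) & (np.abs(rate - est) > motion_thresh)
        use = ~(stopped | saturated | moving)
        est[use] += (rate[use] - est[use]) / (n[use] + 1)  # running mean update
        n[use] += 1
        stopped |= saturated | moving
        prev = frame
    return est
```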

Proceedings Article
01 Jan 2003
TL;DR: This paper describes the use of an image appearance model, iCAM, to render high dynamic range images for display, and describes specific implementation details for using that framework to render high dynamic range images.
Abstract: Color imaging systems are continuously improving, and have now improved to the point of capturing high dynamic range scenes. Unfortunately most commercially available color display devices, such as CRTs and LCDs, are limited in their dynamic range. It is necessary to tone-map, or render, the high dynamic range images in order to display them onto a lower dynamic range device. This paper describes the use of an image appearance model, iCAM, to render high dynamic range images for display. Image appearance models have greater flexibility over dedicated tone-scaling algorithms as they are designed to predict how images perceptually appear, and not designed for the singular purpose of rendering. In this paper we discuss the use of an image appearance framework, and describe specific implementation details for using that framework to render high dynamic range images.

Journal ArticleDOI
TL;DR: This paper derives the optimal vignetting configuration and implements it using an external filter with spatially varying transmittance, and derives efficient scene sampling conditions as well as ways to self-calibrate the vignetting effects.
Abstract: We present an approach that significantly enhances the capabilities of traditional image mosaicking. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that this approach can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions as well as ways to self calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment we mounted such a filter on a standard 8-bit video camera, to obtain an image panorama with dynamic range comparable to imaging with a 16-bit camera.

Journal ArticleDOI
TL;DR: Images based on the quadratic components are shown to provide contrast enhancement between tissue and ultrasound contrast agents (UCAs) without loss in spatial resolution and to preserve the low scattering regions when compared with standard B-mode or harmonic images.
Abstract: We present a new algorithm for deriving a second-order Volterra filter (SVF) capable of separating linear and quadratic components from echo signals. Images based on the quadratic components are shown to provide contrast enhancement between tissue and ultrasound contrast agents (UCAs) without loss in spatial resolution. It is also shown that the quadratic images preserve the low scattering regions due to their high dynamic range when compared with standard B-mode or harmonic images. A robust algorithm for deriving the filter has been developed and tested on real-time imaging data from contrast and tissue-mimicking media. Illustrative examples from image targets containing contrast agent and tissue-mimicking media are presented and discussed. Quantitative assessment of the contrast enhancement is performed on both the RF data and the envelope-detected log-compressed image data. It is shown that the quadratic images offer levels of enhancement comparable or exceeding those from harmonic filters while maintaining the visibility of low scattering regions of the image.

Patent
09 Jul 2003
TL;DR: In this article, a vehicular vision system comprising a high dynamic range (HDR range) was proposed for rear vision, collision avoidance, obstacle detection, adaptive cruise control, rain sensing, exterior light control, and lane departure warning.
Abstract: A vehicular vision system is disclosed (Fig. 1) comprising a high dynamic range (101, 102). The systems and methods are advantages for rear vision, collision avoidance, obstacle detection, adaptive cruise control, rain sensing (100), exterior light control (110), and lane departure warning, as well as other applications where a given scene may comprise objects having widely varying brightness values (120).

Journal ArticleDOI
01 Jan 2003
TL;DR: In this paper, the authors describe the latest developments in high-resolution cross-strip (XS) anode readout for photon counting imaging with microchannel plates (MCPs).
Abstract: The combination of a photocathode, microchannel plate stack and photon counting, imaging readout provides a powerful tool for high dynamic range, high spatial resolution, and high timing accuracy detectors for the X-ray to visible light spectral range. Significant improvements in spatial resolution, quantum efficiency, and background rate have made these devices attractive for many applications and are being applied to completely new research areas. We describe the latest developments in high-resolution cross-strip (XS) anode readout for photon counting imaging with microchannel plates (MCPs). We show that the spatial resolution of an MCP detector with the cross-strip readout is now limited only by the MCP pore width. For the first time, we show images of resolution test masks illustrating the exceptional spatial resolution of the cross-strip readout, which at the present time can resolve features on the scale of ~7 μm with MCP gains as low as 6 × 10^5. Low-gain operation of the detector with high spatial resolution shown in this paper is very beneficial for applications with high local counting rate and long lifetime requirements. The spatial resolution of the XS anode can be increased even further with improved anode uniformity, lower noise front-end electronics, and new centroiding algorithms. This could be important when MCPs with smaller than existing 6-μm pores become commercially available, thus improving the detector resolution down to the few micrometer scale.

Journal ArticleDOI
TL;DR: A novel method for computing local adaptation luminance that can be used with several different visual adaptation-based tone-reproduction operators for displaying visually accurate high-dynamic range images.
Abstract: Realistic display of high-dynamic range images is a difficult problem. Previous methods for high-dynamic range image display suffer from halo artifacts or are computationally expensive. We present a novel method for computing local adaptation luminance that can be used with several different visual adaptation-based tone-reproduction operators for displaying visually accurate high-dynamic range images. The method uses fast image segmentation, grouping, and graph operations to generate local adaptation luminance. Results on several images show excellent dynamic range compression, while preserving detail without the presence of halo artifacts. With adaptive assimilation, the method can be configured to bring out a high-dynamic range appearance in the display image. The method is efficient in terms of processor and memory use.
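
The tone-reproduction step itself can be illustrated with a local adaptation luminance feeding a compressive operator. The sketch below substitutes a Gaussian-blurred log-luminance for the paper's segmentation-and-grouping adaptation, so unlike the paper it can produce halos near strong edges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(luminance, sigma=16.0):
    """Display values in [0, 1) from HDR luminance via a local adaptation value.

    The paper derives the adaptation luminance with fast segmentation, grouping,
    and graph operations to avoid halos; this sketch uses a Gaussian-blurred
    log-luminance instead, which is simpler but halo-prone.
    """
    log_l = np.log10(np.maximum(luminance, 1e-6))
    adaptation = 10.0 ** gaussian_filter(log_l, sigma)   # local adaptation luminance
    return luminance / (luminance + adaptation)           # compressive operator
```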

Proceedings ArticleDOI
11 Feb 2003
TL;DR: In this paper, a high dynamic range viewer based on the 120-degree field-of-view LEEP (Large Expanse Extra Perspective) stereo optics used in the original NASA virtual reality systems is presented.
Abstract: In this paper we present a High Dynamic Range viewer based on the 120-degree field-of-view LEEP (Large Expanse Extra Perspective) stereo optics used in the original NASA virtual reality systems. By combining these optics with an intense backlighting system (20 kcd/m²) and layered transparencies, we are able to reproduce the absolute luminance levels and full dynamic range of almost any visual environment. This is important because it allows us to display environments with luminance levels that would not be displayable on a standard monitor. This technology may enable researchers to conduct controlled experiments in visual contrast, chromatic adaptation, and disability and discomfort glare without the usual limitations of dynamic range and field of view imposed by conventional CRT display systems. In this paper, we describe the basic system and techniques used to produce the transparency layers from a high dynamic range rendering or scene capture. We further present a series of psychophysical experiments demonstrating the device's ability to reproduce visual percepts, and compare this result to the real scene and a visibility matching tone reproduction operator presented on a conventional CRT display.

Journal ArticleDOI
TL;DR: A novel paradigm for information visualization in high dynamic range images is presented, aiming to produce a minimal set of images capturing the information all over the high dynamic range data, while at the same time preserving a natural appearance for each one of the images in the set.
Abstract: A novel paradigm for information visualization in high dynamic range images is presented in this paper. These images, real or synthetic, have luminance with typical ranges many orders of magnitude higher than that of standard output/viewing devices, thereby requiring some processing for their visualization. In contrast with existent approaches, which compute a single image with reduced range, close in a given sense to the original data, we propose to look for a representative set of images. The goal is then to produce a minimal set of images capturing the information all over the high dynamic range data, while at the same time preserving a natural appearance for each one of the images in the set. A specific algorithm that achieves this goal is presented and tested on natural and synthetic data.

Proceedings ArticleDOI
09 Feb 2003
TL;DR: The CMOS imaging system-on-chip includes an embedded frame buffer and operates at 100 MHz and produces color video at up to 500 frames/s with over 100 dB dynamic range using multi-capture.
Abstract: The CMOS imaging system-on-chip includes an embedded frame buffer and operates at 100 MHz. The programmable chip produces color video at up to 500 frames/s with over 100 dB dynamic range using multi-capture. The sensor utilizes a 0.18-μm 1P4M CMOS process and dissipates 600 mW including I/O.

Proceedings ArticleDOI
David Wisell1
20 May 2003
TL;DR: A measurement system for dynamic characterization of power amplifiers that uses data that are generated and collected at baseband to accurately calculate AM/AM and AM/PM distortion as well as memory effects in the amplifiers is described.
Abstract: This paper describes, and discusses in some detail, a measurement system for dynamic characterization of power amplifiers. The system uses data that are generated and collected at baseband to accurately calculate AM/AM and AM/PM distortion as well as memory effects in the amplifiers. The limitations of the system in terms of dynamic range and bandwidth are discussed as well as techniques to overcome them. The system may serve as a tool both for designers of power amplifiers and for the development of amplifier models and systems for predistortion.
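
From time-aligned complex baseband records of the stimulus and the amplifier output, AM/AM and AM/PM curves are essentially the magnitude and angle of the complex gain binned by input amplitude. A minimal sketch under those assumptions (alignment, normalization, and the memory-effect analysis are out of scope):

```python
import numpy as np

def am_am_pm(x, y, n_bins=64):
    """AM/AM and AM/PM curves from time-aligned complex baseband records.

    x: stimulus samples, y: amplifier output samples (complex, same length).
    Returns (amplitude, gain_dB, phase_deg) per input-amplitude bin.
    """
    amp = np.abs(x)
    keep = amp > 0
    amp, gain = amp[keep], y[keep] / x[keep]                # complex gain per sample
    edges = np.linspace(0.0, amp.max(), n_bins + 1)
    idx = np.clip(np.digitize(amp, edges) - 1, 0, n_bins - 1)
    centers, g_db, ph = [], [], []
    for b in range(n_bins):
        g = gain[idx == b]
        if g.size:
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            g_db.append(20.0 * np.log10(np.abs(g.mean())))   # AM/AM: mean gain magnitude
            ph.append(np.degrees(np.angle(g.mean())))        # AM/PM: mean phase shift
    return np.array(centers), np.array(g_db), np.array(ph)
```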

Proceedings ArticleDOI
17 Jun 2003
TL;DR: Changes made to the Retinex algorithm for processing high dynamic range images are presented, along with a further integration of the Retinex with specialized tone mapping algorithms that enables the production of images that appear as similar as possible to the viewer's perception of actual scenes.
Abstract: A tone mapping algorithm for displaying high contrast scenes was designed on the basis of the results of experimental tests using human subjects. Systematic perceptual evaluation of several existing tone mapping techniques revealed that the most "natural" appearance was determined by the presence in the output image of detailed scenery features often made visible by limiting contrast and by properly reproducing brightness. Taking these results into account, we developed a system to produce images close to the ideal preference point for high dynamic range input image data. Of the algorithms that we tested, only the Retinex algorithm was capable of retrieving detailed scene features hidden in high luminance areas while still preserving a good contrast level. This paper presents changes made to the Retinex algorithm for processing high dynamic range images, and a further integration of the Retinex with specialized tone mapping algorithms that enables the production of images that appear as similar as possible to the viewer's perception of actual scenes.
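
For reference, the classic single-scale Retinex that the paper starts from divides (in log space) each pixel by a Gaussian-blurred surround. A textbook sketch is below; the paper's modifications for HDR input and its integration with tone mapping are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0):
    """Single-scale Retinex: log image minus log of its blurred surround."""
    img = np.maximum(image.astype(np.float64), 1e-6)
    surround = gaussian_filter(img, sigma)
    r = np.log10(img) - np.log10(np.maximum(surround, 1e-6))
    # stretch the Retinex output into a displayable [0, 1] range
    return (r - r.min()) / max(r.max() - r.min(), 1e-6)
```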

Patent
18 Mar 2003
TL;DR: In this article, a high speed analog to digital converter (ADC) is coupled to a detector (14) and a processor (18) to generate an analog signal in response to the detection of a trace sample, such as an ionized molecule or a beam of light.
Abstract: A high speed analog to digital converter ('ADC') (12) that can be used in a detector system (10). The ADC is coupled to a detector (14) and a processor (18). The detector (14) generates an analog signal in response to the detection of a trace sample, such as an ionized molecule or a beam of light. The processor (18) determines a baseline value and threshold value. Portions of the analog signal at or below the threshold are assigned the baseline value. The threshold typically corresponds to a value above the noise level in the system. The detector (14) thus removes undesirable noise from the readout value. The process can compensate for factors such as DC drift while providing accurate data regarding detection of the trace sample.
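
The thresholding idea can be sketched in a few lines: estimate a baseline and a noise threshold from the trace, then report the baseline for any sample that does not exceed the threshold. The robust statistics used below are assumptions; the patent leaves the exact derivation to the processor.

```python
import numpy as np

def suppress_baseline(samples, noise_sigmas=3.0):
    """Replace sub-threshold ADC samples with an estimated baseline value.

    samples: 1-D array of digitized detector output.
    Returns (cleaned_samples, baseline, threshold).
    """
    baseline = np.median(samples)                                   # assumed baseline estimate
    noise = np.std(samples[samples <= np.percentile(samples, 90)])  # noise from the quiet part of the trace
    threshold = baseline + noise_sigmas * noise
    cleaned = np.where(samples > threshold, samples, baseline)
    return cleaned, baseline, threshold
```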

Patent
13 Jan 2003
TL;DR: An optical coupling assembly having an optical receiver that exhibits extended dynamic range, and, more particularly, an optical receiver that is integrated with a Variable Optical Attenuator (VOA) to extend the dynamic range of the receiver is presented in this article.
Abstract: An optical coupling assembly having an optical receiver that exhibits extended dynamic range, and, more particularly, an optical receiver that is integrated with a Variable Optical Attenuator (VOA) to extend the dynamic range of the receiver.


Patent
10 Feb 2003
TL;DR: In this article, an optical signal receiver with an increased dynamic range for detecting optical signals whose intensity varies over a wide range is presented. But the circuit is not designed to detect optical signals over a broad range, since the current gain of the avalanche photo-diode is a function of the reverse bias voltage.
Abstract: An optical signal receiver has an increased dynamic range for detecting optical signals whose intensity varies over a wide range. In one embodiment, the optical signal receiver includes a circuit operable to provide a reverse bias voltage and an avalanche photo-diode (APD) coupled to the circuit to receive the reverse bias voltage. The circuit is operable to lower the reverse bias voltage in response to an increase in power of the received optical signals. Since the current gain of the APD is a function of the reverse bias voltage, the circuit indirectly lowers the current gain of the APD in response to the increase in power of the received optical signals. As a result, the optical signal receiver can be used to detect optical signals whose intensity varies over a broad range.
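
The control idea is a feedback loop that lowers the APD reverse bias, and therefore its avalanche gain, as received optical power rises. A toy proportional controller is sketched below; the voltage limits, step size, and target power are placeholders, not values from the patent.

```python
def adjust_apd_bias(measured_power_dbm, bias_volts,
                    target_dbm=-20.0, step_volts=0.5,
                    v_min=30.0, v_max=70.0):
    """Lower the APD reverse bias (and hence its gain) as optical power rises.

    A toy proportional step per measurement; all numeric values are
    illustrative placeholders.
    """
    error_db = measured_power_dbm - target_dbm   # positive when the signal is too strong
    new_bias = bias_volts - step_volts * error_db
    return min(max(new_bias, v_min), v_max)      # keep the bias within safe limits
```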

Journal ArticleDOI
S. Mohr1, Thomas Bosselmann1
22 Apr 2003
TL;DR: In this paper, a combination of analog and digital signal processing algorithms was used to achieve a dynamic range from 0.05 to 100 times the rated current, with an rms error of less than 0.5%.
Abstract: In the power industry, current must be measured for metering and protection purposes. For such measurements, the use of magnetooptic current transformers offers many advantages. Conventional sensors, however, need two magnetooptic transformers to realize the required high dynamic range. Using a combination of analog and digital signal processing algorithms, the inherent low noise of the photonic input signals is maintained. The system design is based on a rigorous investigation and optimization of the error sources in the signal processing chain. The validity of this approach was confirmed by a demonstrator which achieved a dynamic range from 0.05 to 100 times the rated current and whose rms error was less than 0.5 %.

Proceedings ArticleDOI
01 Jan 2003
TL;DR: In this paper, the design and performance of a compact DC-40 GHz variable attenuator MMIC with triple-gate FETs are reported. The MMIC is also used in a variable gain amplifier (VGA) specifically designed for Ka-band LMDS and VSAT radios.
Abstract: The design and performance of a compact DC-40 GHz variable attenuator MMIC are reported in this paper. Using our standard 4-inch 0.25-μm GaAs power PHEMT technology, this T-type attenuator exhibits more than 30-dB dynamic range, with a nominal insertion loss of 4 dB over the DC-40 GHz band. By using triple-gate FETs, typical input power compression of more than 10 to 20 dBm is achieved with a die area of only 1 mm² (0.9 × 1.12 mm²) and better overall performance. This MMIC is 30% smaller than any previously reported analog attenuators operating in the DC-40 GHz frequency range. This attenuator is used as well in a variable gain amplifier (VGA), specifically designed for Ka-band LMDS and VSAT radios. From 24 to 32 GHz, the VGA MMIC demonstrates a maximum gain of 32 dB, with more than 35-dB dynamic range and 24-dBm output power at 1-dB gain compression.