Topic

High dynamic range

About: High dynamic range is a research topic. Over the lifetime, 4,280 publications have been published within this topic, receiving 76,293 citations. The topic is also known as: HDR.


Papers
Patent
31 Jul 2007
TL;DR: In this patent, a data structure for a high dynamic range image is defined, comprising a tone map having a reduced dynamic range plus HDR information; the full high dynamic range image can be reconstructed from the tone map and the HDR information.
Abstract: A data structure defining a high dynamic range image comprises a tone map having a reduced dynamic range and HDR information. The high dynamic range image can be reconstructed from the tone map and the HDR information. The data structure can be backwards compatible with legacy hardware or software viewers. The data structure may comprise a JFIF file having the tone map encoded as a JPEG image with the HDR information in an application extension or comment field of the JFIF file, or an MPEG file having the tone map encoded as an MPEG image with the HDR information in a video or audio channel of the MPEG file. Apparatus and methods for encoding or decoding the data structure may apply pre- or post-correction to compensate for lossy encoding of the high dynamic range information.
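As a rough illustration of the backwards-compatible layout sketched in this abstract (a tone-mapped JPEG that legacy viewers decode normally, with the HDR side information tucked into an application segment), the following Python snippet splices extra data into APPn segments of a JFIF stream. The APP11 marker number and the "HDRI" identifier are illustrative assumptions, not values taken from the patent.

```python
import struct

def embed_hdr_info(jpeg_bytes: bytes, hdr_info: bytes, app_marker: int = 0xEB) -> bytes:
    """Splice HDR side information into APPn segments of a JPEG/JFIF stream.

    Legacy viewers skip unknown APPn segments and show the tone-mapped JPEG;
    an HDR-aware decoder can scan for the identifier and rebuild the HDR image.
    0xEB (APP11) and the "HDRI" identifier are assumptions for illustration.
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG/JFIF stream"
    ident = b"HDRI\x00"
    max_chunk = 65533 - len(ident)            # an APPn payload is limited to 65,533 bytes
    segments = []
    for off in range(0, len(hdr_info), max_chunk):
        chunk = hdr_info[off:off + max_chunk]
        length = len(ident) + len(chunk) + 2  # +2: the length field counts itself
        segments.append(b"\xff" + bytes([app_marker]) +
                        struct.pack(">H", length) + ident + chunk)
    # Insert right after SOI; a stricter writer would keep the JFIF APP0 segment first.
    return jpeg_bytes[:2] + b"".join(segments) + jpeg_bytes[2:]
```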

97 citations

Journal ArticleDOI
TL;DR: This paper presents an efficient and accurate multiple exposure fusion technique for HDRI acquisition, which simultaneously estimates displacements as well as occlusion and saturation regions using maximum a posteriori estimation, and constructs motion-blur-free HDRIs.
Abstract: A multiple exposure fusion to enhance the dynamic range of an image is proposed. The construction of high dynamic range images (HDRIs) is performed by combining multiple images taken with different exposures and estimating the irradiance value for each pixel. This is a common process for HDRI acquisition. During this process, displacements of the images caused by object movements often yield motion blur and ghosting artifacts. To address the problem, this paper presents an efficient and accurate multiple exposure fusion technique for HDRI acquisition. Our method simultaneously estimates displacements and occlusion and saturation regions by using maximum a posteriori estimation and constructs motion-blur-free HDRIs. We also propose a new weighting scheme for the multiple image fusion. We demonstrate that our HDRI acquisition algorithm is accurate, even for images with large motion.
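The baseline fusion step that this abstract calls "a common process for HDRI acquisition" (estimating per-pixel irradiance as a weighted average of exposure-normalized values) can be sketched as follows. The paper's actual contributions, the MAP-based estimation of displacements, occlusion and saturation and its new weighting scheme, are not reproduced here, and the simple hat weighting is an assumption.

```python
import numpy as np

def fuse_exposures(images, exposure_times, eps=1e-6):
    """Baseline multi-exposure fusion into an irradiance (HDR) map.

    `images` are radiometrically linear frames scaled to [0, 1], and
    `exposure_times` are their exposure times in seconds. Each pixel's
    irradiance is a weighted average of (pixel value / exposure time),
    with a hat weight that down-weights under- and over-exposed samples.
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peaks at mid-gray, 0 at the extremes
        num += w * (img / t)
        den += w
    return num / (den + eps)
```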

96 citations

Journal ArticleDOI
TL;DR: Flexibility in the design is achieved by applying different voltages to the different liquid crystal retarders, thus compensating for small thickness deviations from the nominal values and obtaining the high dynamic range.
Abstract: Extension of the dynamic range of a liquid crystal tunable Lyot filter is demonstrated by incorporating a liquid crystal variable retarder as an eliminator for the third- and fourth-order peaks. The filter is continuously tunable in the range 500 nm to 900 nm with a nominal width in the range 50 nm to 100 nm. The design procedure is described, including the exact solution to the LC director profile and the suitability for biomedical optical imaging applications. Flexibility in the design is achieved by applying different voltages to the different liquid crystal retarders, thus compensating for small thickness deviations from the nominal values and obtaining the high dynamic range.
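To illustrate the principle only (not the paper's design), the transmission of an idealized Lyot-type filter is a product of cos^2(pi * retardance / wavelength) terms, one per stage, so adding a voltage-tunable LC retarder multiplies in one more term that can be set to suppress unwanted higher-order peaks. The stage retardances in this sketch are made-up values, not the published design.

```python
import numpy as np

def lyot_transmission(wavelengths_nm, retardances_nm):
    """Transmission of an idealized Lyot-type filter (perfect polarizers,
    normal incidence): the product of cos^2(pi * retardance / wavelength)
    over all stages."""
    t = np.ones_like(wavelengths_nm, dtype=float)
    for gamma in retardances_nm:
        t *= np.cos(np.pi * gamma / wavelengths_nm) ** 2
    return t

wl = np.linspace(500.0, 900.0, 2001)                         # tuning range quoted in the abstract
two_stage = lyot_transmission(wl, [1400.0, 2800.0])          # assumed stage retardances
with_extra = lyot_transmission(wl, [1400.0, 2800.0, 700.0])  # extra LC retarder stage
```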

94 citations

Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm for synthesizing a high dynamic range, motion-blur-free still image from multiple captures, which consists of two main procedures: photocurrent estimation, and saturation and motion detection.
Abstract: Advances in CMOS image sensors enable high-speed image readout, which makes it possible to capture multiple images within a normal exposure time. Earlier work has demonstrated the use of this capability to enhance sensor dynamic range. This paper presents an algorithm for synthesizing a high dynamic range, motion blur free, still image from multiple captures. The algorithm consists of two main procedures, photocurrent estimation and saturation and motion detection. Estimation is used to reduce read noise, and, thus, to enhance dynamic range at the low illumination end. Saturation detection is used to enhance dynamic range at the high illumination end as previously proposed, while motion blur detection ensures that the estimation is not corrupted by motion. Motion blur detection also makes it possible to extend exposure time and to capture more images, which can be used to further enhance dynamic range at the low illumination end. Our algorithm operates completely locally; each pixel's final value is computed using only its captured values, and recursively, requiring the storage of only a constant number of values per pixel independent of the number of images captured. Simulation and experimental results demonstrate the enhanced signal-to-noise ratio (SNR), dynamic range, and the motion blur prevention achieved using the algorithm.
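A loose sketch of the recursive, purely local estimation described above (not the authors' exact algorithm): each pixel keeps only a running photocurrent estimate and a sample count, and stops accumulating once it saturates or once a new sample deviates from the current estimate by more than a noise-scaled threshold, standing in for the saturation and motion detection procedures. The thresholds and noise model are assumptions.

```python
import numpy as np

def synthesize_hdr(captures, capture_times, sat_level=0.98,
                   motion_sigma=3.0, read_noise=0.01):
    """Recursive per-pixel photocurrent estimation from multiple captures.

    `captures` are frames normalized to [0, 1]; `capture_times` are the times
    at which they were read out. Only a constant amount of state is kept per
    pixel: the running estimate, a sample count, and an active flag.
    """
    estimate = np.zeros(captures[0].shape)           # photocurrent estimate (signal / time)
    n_used = np.zeros_like(estimate)                 # samples contributing per pixel
    active = np.ones(estimate.shape, dtype=bool)     # pixels still being updated

    for frame, t in zip(captures, capture_times):
        sample = frame / t
        saturated = frame >= sat_level
        # Crude motion check: the new sample disagrees with the running estimate.
        moved = (n_used > 0) & (np.abs(sample - estimate) > motion_sigma * read_noise / t)
        active &= ~(saturated | moved)
        n_used[active] += 1
        estimate[active] += (sample[active] - estimate[active]) / n_used[active]
    return estimate
```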

93 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel and robust approach for respiration tracking which compensates for the negative effects of variations in the ambient temperature and motion artifacts and can accurately extract breathing rates in highly dynamic thermal scenes.
Abstract: The ability to monitor the respiratory rate, one of the vital signs, is extremely important for the medical treatment, healthcare and fitness sectors. In many situations, mobile methods, which allow users to undertake everyday activities, are required. However, current monitoring systems can be obtrusive, requiring users to wear respiration belts or nasal probes. Alternatively, contactless, digital image sensor-based remote photoplethysmography (PPG) can be used. However, remote PPG requires an ambient source of light, and does not work properly in dark places or under varying lighting conditions. Recent advances in thermographic systems have shrunk their size, weight and cost, to the point where it is possible to create smartphone-based respiration rate monitoring devices that are not affected by lighting conditions. However, mobile thermal imaging is challenged in scenes with high thermal dynamic ranges (e.g. due to the different environmental temperature distributions indoors and outdoors). This challenge is further amplified by general problems such as motion artifacts and low spatial resolution, leading to unreliable breathing signals. In this paper, we propose a novel and robust approach for respiration tracking which compensates for the negative effects of variations in the ambient temperature and motion artifacts and can accurately extract breathing rates in highly dynamic thermal scenes. The approach is based on tracking the nostril of the user and using local temperature variations to infer inhalation and exhalation cycles. It has three main contributions. The first is a novel Optimal Quantization technique which adaptively constructs a color mapping of absolute temperature to improve segmentation, classification and tracking. The second is the Thermal Gradient Flow method that computes thermal gradient magnitude maps to enhance the accuracy of the nostril region tracking. Finally, we introduce the Thermal Voxel method to increase the reliability of the captured respiration signals compared to the traditional averaging method. We demonstrate the extreme robustness of our system to track the nostril region and measure the respiratory rate by evaluating it during controlled respiration exercises in high thermal dynamic scenes (e.g. strong correlation (r = 0.9987) with the ground truth from the respiration-belt sensor). We also demonstrate how our algorithm outperformed standard algorithms in settings with different amounts of environmental thermal changes and human motion. We openly release the tracked ROI sequences of the datasets collected for these studies (i.e. under both controlled and unconstrained real-world settings) to the community to foster work in this area.
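Two of the ideas above can be illustrated very loosely in Python: an adaptive (here, percentile-based) quantization of absolute temperature so the mapping tracks ambient drift, and a respiration proxy taken as the mean temperature of the tracked nostril ROI over time. Neither function reproduces the paper's Optimal Quantization, Thermal Gradient Flow, or Thermal Voxel methods; the binning scheme and the fixed ROI are assumptions.

```python
import numpy as np

def adaptive_quantize(thermal_frame, n_levels=16):
    """Quantize absolute temperatures into `n_levels` bins whose edges come
    from the frame's own temperature distribution (percentiles), so the
    mapping adapts as the ambient temperature range drifts."""
    edges = np.percentile(thermal_frame, np.linspace(0, 100, n_levels + 1)[1:-1])
    return np.digitize(thermal_frame, edges)

def nostril_respiration_signal(frames, roi):
    """Respiration proxy: mean temperature inside a nostril ROI over time
    (exhalation warms the region, inhalation cools it). `roi` is a
    (row_slice, col_slice) pair; tracking the ROI itself is not handled here."""
    rows, cols = roi
    return np.array([frame[rows, cols].mean() for frame in frames])
```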

93 citations


Network Information
Related Topics (5)
Pixel: 136.5K papers, 1.5M citations, 88% related
Image processing: 229.9K papers, 3.5M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Feature extraction: 111.8K papers, 2.1M citations, 83% related
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 122
2022: 263
2021: 164
2020: 243
2019: 238
2018: 262