
Showing papers on "Contrast (vision) published in 2021"


Journal ArticleDOI
TL;DR: FovVideoVDP as mentioned in this paper is a video difference metric that models the spatial, temporal, and peripheral aspects of perception. It is derived from psychophysical studies of the early visual system, which model spatio-temporal contrast sensitivity, cortical magnification, and contrast masking.
Abstract: FovVideoVDP is a video difference metric that models the spatial, temporal, and peripheral aspects of perception. While many other metrics are available, our work provides the first practical treatment of these three central aspects of vision simultaneously. The complex interplay between spatial and temporal sensitivity across retinal locations is especially important for displays that cover a large field-of-view, such as Virtual and Augmented Reality displays, and associated methods, such as foveated rendering. Our metric is derived from psychophysical studies of the early visual system, which model spatio-temporal contrast sensitivity, cortical magnification and contrast masking. It accounts for physical specification of the display (luminance, size, resolution) and viewing distance. To validate the metric, we collected a novel foveated rendering dataset which captures quality degradation due to sampling and reconstruction. To demonstrate our algorithm's generality, we test it on 3 independent foveated video datasets, and on a large image quality dataset, achieving the best performance across all datasets when compared to the state-of-the-art.

61 citations


Journal ArticleDOI
TL;DR: In this paper, the authors measured visual acuity at isoeccentric peripheral locations (10 deg eccentricity) every 15° of polar angle. On each trial, observers judged the orientation (±45°) of one of four equidistant, suprathreshold grating stimuli varying in spatial frequency (SF).
Abstract: Human vision is heterogeneous around the visual field. At a fixed eccentricity, performance is better along the horizontal than the vertical meridian and along the lower than the upper vertical meridian. These asymmetric patterns, termed performance fields, have been found in numerous visual tasks, including those mediated by contrast sensitivity and spatial resolution. However, it is unknown whether spatial resolution asymmetries are confined to the cardinal meridians or whether and how far they extend into the upper and lower hemifields. Here, we measured visual acuity at isoeccentric peripheral locations (10 deg eccentricity), every 15° of polar angle. On each trial, observers judged the orientation (± 45°) of one of four equidistant, suprathreshold grating stimuli varying in spatial frequency (SF). On each block, we measured performance as a function of stimulus SF at 4 of 24 isoeccentric locations. We estimated the 75%-correct SF threshold, SF cutoff point (i.e., chance-level), and slope of the psychometric function for each location. We found higher SF estimates (i.e., better acuity) for the horizontal than the vertical meridian and for the lower than the upper vertical meridian. These asymmetries were most pronounced at the cardinal meridians and decreased gradually as the angular distance from the vertical meridian increased. This gradual change in acuity with polar angle reflected a shift of the psychometric function without changes in slope. The same pattern was found under binocular and monocular viewing conditions. These findings advance our understanding of visual processing around the visual field and help constrain models of visual perception.
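
The 75%-correct threshold estimation described above can be illustrated by fitting a sigmoid psychometric function to proportion-correct data. A minimal sketch in Python with hypothetical data (the study's actual fitting procedure and parameterization may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic psychometric function for a two-choice (+/-45 deg) orientation
# task: performance falls from ceiling toward chance (0.5) as spatial
# frequency (SF) increases; the logistic midpoint is exactly 75% correct.
def psychometric(log_sf, threshold, slope):
    return 0.5 + 0.5 / (1.0 + np.exp(slope * (log_sf - threshold)))

# Hypothetical proportion-correct data at each tested SF (cycles/deg).
sf = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
p_correct = np.array([0.98, 0.97, 0.93, 0.81, 0.62, 0.51])

(thr, slope), _ = curve_fit(psychometric, np.log(sf), p_correct,
                            p0=[np.log(6.0), 2.0])
print(f"75%-correct SF threshold: {np.exp(thr):.2f} cpd, slope: {slope:.2f}")
```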

48 citations


Journal ArticleDOI
TL;DR: This work designs dedicated fractions to compensate for the weaker color channels, and an adaptive contrast enhancement algorithm is applied to each color channel to produce background-stretched and foreground-stretched images, which significantly improves the contrast of the output image.

46 citations


Journal Article
TL;DR: An in-depth analysis of the proposed C2D framework is performed, including investigating the performance of different pre-training approaches and estimating the effective upper bound of the LNL performance with semi-supervised learning.
Abstract: Advances in semi-supervised methods for image classification significantly boosted performance in the learning with noisy labels (LNL) task. Specifically, by discarding the erroneous labels (and keeping the samples), the LNL task becomes a semi-supervised one for which powerful tools exist. Identifying the noisy samples, however, heavily relies on the success of a warm-up stage where standard supervised training is performed using the full (noisy) training set. This stage is sensitive not only to the noise level but also to the choice of hyperparameters. In this paper, we propose to solve this problem by utilizing self-supervised pre-training. Our approach, which we name Contrast to Divide, offers several important advantages. First, by removing the labels altogether, our pre-trained features become agnostic to the amount of noise in the labels, allowing accurate separation of noisy samples even under high noise levels. Second, as recently shown, semi-supervised methods significantly benefit from self-supervised pre-training. Moreover, compared with standard pre-training approaches (e.g., supervised training on ImageNet), self-supervised pre-training does not suffer from a domain gap. We demonstrate the effectiveness of the proposed method in various settings with both synthetic and real noise. Our results indicate that Contrast to Divide brings a new state-of-the-art by a significant margin to both CIFAR-10 and CIFAR-100. For example, in the high-noise regime of 90%, we get a boost of more than 27% for CIFAR-100 and more than 17% for CIFAR-10 over the previous state-of-the-art. Moreover, we achieve comparable performance on Clothing-1M without using ImageNet pre-training. Code for reproducing our experiments is available at https://github.com/ContrastToDivide/C2D

44 citations


Journal ArticleDOI
Abstract: For sequence classification, an important issue is to find discriminative features, where sequential pattern mining (SPM) is often used to find frequent patterns from sequences as features. To improve classification accuracy and pattern interpretability, contrast pattern mining emerges to discover patterns with high-contrast rates between different categories. To date, existing contrast SPM methods face many challenges, including excessive parameter selection and inefficient occurrence counting. To tackle these issues, this article proposes a top-k self-adaptive contrast SPM, which adaptively adjusts the gap constraints to find top-k self-adaptive contrast patterns (SCPs) from positive and negative sequences. One of the key tasks of the mining problem is to calculate the support (the number of occurrences) of a pattern in each sequence. To support efficient counting, we store all occurrences of a pattern in a special array in a Nettree, an extended tree structure with multiple roots and multiple parents. We employ the array to calculate the occurrences of all its superpatterns with one-way scanning to avoid redundant calculation. Meanwhile, because the contrast SPM problem does not satisfy the Apriori property, we propose Zero and Less strategies to prune candidate patterns and a Contrast-first mining strategy to select patterns with the highest contrast rate as the prefix subpattern and to calculate the contrast rates of all their superpatterns. Experiments validate the efficiency of the proposed algorithm and show that contrast patterns significantly outperform frequent patterns for sequence classification. The algorithms and datasets can be downloaded from https://github.com/wuc567/Pattern-Mining/tree/master/SCP-Miner.

26 citations


Journal ArticleDOI
Zhaobo Qi1, Shuhui Wang1, Chi Su, Li Su1, Qingming Huang1, Qi Tian2 
TL;DR: Qi et al. as discussed by the authors proposed a self-regulated learning framework to regulate the intermediate representation consecutively to produce a representation that (a) emphasizes the novel information in the frame of the current time-stamp in contrast to previously observed content, and (b) reflects its correlation with previously observed frames.
Abstract: Future activity anticipation is a challenging problem in egocentric vision. As a standard future activity anticipation paradigm, recursive sequence prediction suffers from the accumulation of errors. To address this problem, we propose a simple and effective Self-Regulated Learning (SRL) framework, which aims to regulate the intermediate representation consecutively to produce a representation that (a) emphasizes the novel information in the frame of the current time-stamp in contrast to previously observed content, and (b) reflects its correlation with previously observed frames. The former is achieved by minimizing a contrastive loss, and the latter can be achieved by a dynamic reweighting mechanism that attends to informative frames in the observed content via a similarity comparison between the feature of the current frame and those of observed frames. The learned final video representation can be further enhanced by multi-task learning which performs joint feature learning on the target activity labels and the automatically detected action and object class tokens. SRL sharply outperforms the existing state of the art in most cases on two egocentric video datasets and two third-person video datasets. Its effectiveness is also verified by the experimental fact that the action and object concepts that support the activity semantics can be accurately identified.
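
The contrastive loss mentioned in (a) is of the InfoNCE family. A generic single-anchor NumPy version is sketched below; this is not the paper's exact formulation, and the feature extraction is omitted:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor vector.

    anchor, positive: (d,) arrays; negatives: (n, d) array. The loss is
    small when the anchor is similar to the positive and dissimilar to
    the negatives under temperature-scaled cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
print(info_nce_loss(rng.normal(size=128), rng.normal(size=128),
                    rng.normal(size=(8, 128))))
```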

25 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate the effectiveness of the proposed SCENS on contrast enhancement and noise mitigation, and both subjective and objective comparisons with state-of-the-art algorithms indicate the superiority of this proposed method.
Abstract: Imaging in low-light conditions often suffers from degradations, such as low visibility, low contrast, and noticeable noise, which significantly reduce the performance of various vision-based applications. While various methods have been proposed to enhance image contrast, the inevitable noise is also notably amplified. Consequently, it is highly desired to take both contrast enhancement and noise suppression into consideration simultaneously. In this article, we propose a novel and unified framework SCENS to simultaneously enhance contrast and suppress noise for low-light images. An observed low-light image is decomposed into illumination, reflectance, and noise components. More specifically, the illumination is estimated using the second-order total generalized variation to preserve the spatial smoothness and the overall structure. In contrast, the piecewise continuity and fine detail of reflectance are maintained by minimizing the residual of gradients between the reflectance and the scene. Experimental results demonstrate the effectiveness of the proposed SCENS on contrast enhancement and noise mitigation. In addition, both subjective and objective comparisons with state-of-the-art algorithms indicate the superiority of the proposed method.
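
The decomposition described above follows the noisy Retinex observation model; its standard form is shown here (the paper adds dedicated regularizers for each component):

```latex
% Observed low-light image S = reflectance R combined pixel-wise
% with illumination L, plus a noise term N:
S = R \circ L + N
```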

22 citations


Journal ArticleDOI
Zhangfan Shen1, Linghao Zhang1, Rui Li1, Jie Hou1, Chenguan Liu1, Weizhuan Hu1 
01 Apr 2021-Displays
TL;DR: The results showed that although there was no significant main effect of luminance contrast on icon search accuracy, participants responded more quickly to medium luminance contrast than to low or high luminance contrast, and a medium or low area ratio was more conducive to participants' identifying icons.

22 citations


Journal ArticleDOI
TL;DR: The visual analog method led to large differences from the other methods under study, whereas the digital image with the cross-polarization filter and the spectrophotometer agreed more closely.
Abstract: Statement of problem: During the selection of tooth color, subjective communication with the laboratory and an incorrect color registration technique can lead to a poor color match of a restoration to adjacent teeth and oral structures. Purpose: The purpose of this cross-sectional study was to compare color registration and color matching in a young Chilean population with 3 different methods: visual with a shade guide, digital visual with a cross-polarized filter, and instrumental with a spectrophotometer. Material and methods: A total of 60 young volunteers were selected for tooth color registration of the maxillary right central incisor by using 3 different methods. Tooth color registration was performed using the CIELab coordinate system, with differences expressed as ΔE. Results: Significant differences were detected between the coordinates recorded by the visual analog method in comparison with the other 2 methods. In contrast, no significant differences were found between the L∗ and b∗ coordinates of the spectrophotometer and the digital visual method with use of a cross-polarization filter. The ΔE obtained between the visual shade and spectrophotometer was 7.35, and the ΔE between the digital visual method with the use of a cross-polarization filter and the spectrophotometer was 6.12. Conclusions: No statistically significant differences were observed between the digital image with the cross-polarization filter and the spectrophotometer in the L∗ and b∗ coordinates of the CIELab system. In contrast, the visual analog method led to large differences from the other methods under study. The ΔE of the digital visual method with the use of cross-polarization filters and the spectrophotometer was 6.2, considered an acceptable color mismatch.
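
The reported ΔE values are color differences between CIELab readings. Assuming the common CIE76 definition (Euclidean distance in Lab space, which the abstract does not confirm), they can be computed as:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings of the same tooth from two methods:
visual_shade = (72.1, 1.8, 18.5)
spectrophotometer = (67.0, 0.9, 13.2)
print(f"dE = {delta_e_cie76(visual_shade, spectrophotometer):.2f}")
```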

Journal ArticleDOI
TL;DR: This work presents a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image, and shows that the concept of "social distancing" can be rethought as a form of social interaction in contrast to a simple location-based rule.
Abstract: Perceiving humans in the context of Intelligent Transportation Systems (ITS) often relies on multiple cameras or expensive LiDAR sensors. In this work, we present a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image. We address the challenges related to the ill-posed monocular 3D tasks by proposing a neural network architecture that predicts confidence intervals in contrast to point estimates. Our neural network estimates human 3D body locations and their orientation with a measure of uncertainty. Our proposed solution (i) is privacy-safe, (ii) works with any fixed or moving cameras, and (iii) does not rely on ground plane estimation. We demonstrate the performance of our method with respect to three applications: locating humans in 3D, detecting social interactions, and verifying the compliance of recent safety measures due to the COVID-19 outbreak. We show that it is possible to rethink the concept of "social distancing" as a form of social interaction in contrast to a simple location-based rule. We publicly share the source code towards an open science mission.

Journal ArticleDOI
TL;DR: In this model, a weighted fidelity term is employed to fuse both the infrared objects in the infrared image and the salient scenes in the visible image, and a weight estimation method is developed based on global luminance contrast-based saliency.
Abstract: A single infrared image or visible image for the same scene is usually insufficient to simultaneously reveal the infrared objects and the scene details. Thus, image fusion techniques play an important role in producing a single image from the images captured by infrared and visible sensors. In this paper, we propose a novel total variation (TV)-based fusion for infrared and visible images. In our model, a weighted fidelity term is employed to fuse both the infrared objects in the infrared image and the salient scenes in the visible image. To this end, a weight estimation method is developed based on global luminance contrast-based saliency. Also, to overcome over-fitting, two constraints are further introduced to merge more details from the visible image and to prevent luminance degradation in the fused result, respectively. Moreover, joint norms are exploited to produce a better result. $L_{2,1,rc}$ provides structural group sparsity for the fidelity term, whereas $L_{1/2}$ provides better gradient sparsity for the detail-preserving term, and $L_{2}$ is utilized for the luminance-degradation-preventing term. Experimental results indicate that the proposed method gives state-of-the-art performance in both visual perception and quantitative scores compared with other methods.

Journal ArticleDOI
TL;DR: The proposed method combines an adaptive chaotic particle swarm optimization algorithm with gamma correction to improve the overall brightness of the image and generate the best brightness adjustment, significantly enhancing the visual effect of low illumination color images.
Abstract: In order to effectively improve the visual effect and image quality of color images under low illumination conditions, we propose an image enhancement method based on the HSV and CIEL*a*b* color spaces for adaptively enhancing color images under low illumination conditions. The proposed method takes into account the characteristics of low illumination color images and includes strategies for contrast and brightness enhancement and for color saturation correction. We combine our proposed adaptive chaotic particle swarm optimization algorithm with gamma correction to improve the overall brightness of the image and generate the best brightness adjustment. In addition, our improved adaptive stretching function is used to enhance the image saturation. The experimental results show that, compared with other traditional and recent color image enhancement algorithms, the proposed algorithm significantly enhances the visual effect of low illumination color images. It not only improves the contrast of low illumination color images and avoids color distortion, but also effectively improves the brightness of the image and provides more detail enhancement while maintaining the naturalness of the image.
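
Gamma correction, the brightness-lifting operation that the chaotic particle swarm optimizer tunes per image, in a minimal fixed-gamma form for illustration:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Gamma correction on an 8-bit image; gamma < 1 lifts dark regions.
    In the paper gamma is tuned per image by the adaptive chaotic PSO;
    a fixed value is used here for illustration."""
    x = img.astype(np.float32) / 255.0
    return np.clip(np.power(x, gamma) * 255.0, 0, 255).astype(np.uint8)

dark = np.full((4, 4), 40, dtype=np.uint8)   # hypothetical dark patch
print(gamma_correct(dark, 0.5)[0, 0])        # 40 -> ~101
```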

Journal ArticleDOI
TL;DR: The results show that the proposed method can be applied to water degradation images in different environments without resorting to an image formation model and can effectively solve color distortion, low contrast, and unobvious details of underwater images.
Abstract: In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve low contrast and color distortion of underwater images. First, the original image is compensated using the red channel, and the compensated image is processed with a white balance. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the local contrast corrected images are fused with sharpened images by the multi-scale fusion method. The results show that the proposed method can be applied to water degradation images in different environments without resorting to an image formation model. It can effectively solve color distortion, low contrast, and unobvious details of underwater images.
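
The multi-scale fusion stage can be sketched with Laplacian pyramids (a Mertens-style fusion is assumed here; the paper's red-channel compensation, white balance, and weight design are omitted). img_a and img_b stand for the contrast-corrected and sharpened versions, w_a and w_b for their single-channel weight maps:

```python
import cv2
import numpy as np

def gauss_pyr(x, levels):
    pyr = [x]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def lap_pyr(x, levels):
    g = gauss_pyr(x, levels)
    pyr = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
           for i in range(levels)]
    return pyr + [g[-1]]

def fuse(img_a, img_b, w_a, w_b, levels=4):
    """Blend two image versions with per-pixel weights: Laplacian
    pyramids of the images, Gaussian pyramids of the weights."""
    s = w_a + w_b + 1e-8                      # normalize weight maps
    ga = gauss_pyr((w_a / s).astype(np.float32), levels)
    gb = gauss_pyr((w_b / s).astype(np.float32), levels)
    la = lap_pyr(img_a.astype(np.float32), levels)
    lb = lap_pyr(img_b.astype(np.float32), levels)
    fused = [ga[i][..., None] * la[i] + gb[i][..., None] * lb[i]
             for i in range(levels + 1)]
    out = fused[-1]                           # collapse the pyramid
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused[i].shape[1::-1]) + fused[i]
    return np.clip(out, 0, 255).astype(np.uint8)
```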

Journal ArticleDOI
TL;DR: The nature of investing in the Bitcoin market evolved from trend-following to excessive momentum and sentiment in the most recent time period, and the random forest model is identified as the most accurate at predicting Bitcoin.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method significantly improves overall brightness, increases contrast details in shadow areas, and strengthens identification of corrosion areas in the image.
Abstract: In this paper, an image enhancement algorithm is presented for identification of corrosion areas and dealing with low contrast present in shadow areas of an image. This algorithm uses histogram equalization processing under the hue-saturation-intensity model. First of all, an etched image is transformed from red-green-blue color space to hue-saturation-intensity color space, and only the luminance component is enhanced. Then, part of the enhanced image is combined with the original tone component, followed by saturation and conversion to red-green-blue color space to obtain the enhanced corrosion image. Experimental results show that the proposed method significantly improves overall brightness, increases contrast details in shadow areas, and strengthens identification of corrosion areas in the image.
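
The core step, equalizing only the luminance component so hue and saturation are preserved, might look as follows; OpenCV's HSV space is used here as a stand-in for the paper's hue-saturation-intensity model, and the file name is hypothetical:

```python
import cv2

def equalize_intensity_only(bgr):
    """Equalize only the brightness channel; hue and saturation are kept.
    OpenCV's HSV is used as a stand-in for the paper's HSI model."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = cv2.equalizeHist(v)                 # enhance luminance component only
    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)

corroded = cv2.imread("corroded_part.png")  # hypothetical input file
if corroded is not None:
    cv2.imwrite("corroded_enhanced.png", equalize_intensity_only(corroded))
```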

Journal ArticleDOI
TL;DR: In this paper, an active learning approach was used to measure the contrast sensitivity function (CSF) in patients with various degrees of dry age-related macular degeneration (AMD) under multiple luminance conditions.

Journal ArticleDOI
Xiaojiao Xie1, Fanghao Song1, Yan Liu1, Shurui Wang1, Dong Yu1 
TL;DR: In this paper, the conjoint effects of color mode and luminance contrast on visual fatigue and subjective preference when using electronic devices under low screen luminance and low ambient illumination at night were investigated.
Abstract: Using electronic devices at night can easily cause visual fatigue. We investigated the conjoint effects of color mode and luminance contrast on visual fatigue and subjective preference when using electronic devices under low screen luminance and low ambient illumination at night. A multidimensional approach based on eye and subjective measures was used to test 2 color modes (dark mode, light mode) and 6 luminance contrast ratios (0.969, 0.935, 0.868, 0.855, 0.725, 0.469) in a $2 \times 6$ experimental design. We used eye movement tracking technology to collect blink rate and pupil diameters, and used a Likert scale to measure subjective visual fatigue and preference. Results showed that reading in the dark mode reduced visual fatigue, as reflected by an increase in blink rate and pupil accommodation. Lower subjective visual fatigue scores and higher preference were found in the light mode, due to subjects’ habit of reading dark text on a light background. There was a significant negative correlation between (text-background) luminance contrast and visual fatigue, and subjects preferred higher luminance contrast. We observed the lowest visual fatigue under the luminance contrast of 0.969 in the dark mode, and the lowest subjective preference when the luminance contrast was lower than 0.725. We suggest that users choose the dark mode to reduce visual fatigue when using electronic devices at night. These findings also provide a reference for the design of interactive interfaces such as tablets and mobile phones, and have practical implications for reducing visual fatigue.
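
The (text-background) luminance contrast ratios above are consistent with a Weber-like definition, C = (Lmax − Lmin)/Lmax; the paper's exact formula is an assumption here:

```python
def luminance_contrast(l_text, l_background):
    """Weber-like contrast C = (Lmax - Lmin) / Lmax (luminances in cd/m2).
    This definition yields values in [0, 1) like the ratios above; the
    paper's exact formula is assumed, not confirmed."""
    l_max, l_min = max(l_text, l_background), min(l_text, l_background)
    return (l_max - l_min) / l_max

# Dark-mode example: 60 cd/m2 text on a 1.9 cd/m2 background -> ~0.968.
print(f"{luminance_contrast(60.0, 1.9):.3f}")
```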

Journal ArticleDOI
22 Jan 2021
TL;DR: This paper found that neurons in the lateral geniculate nucleus (LGN) receiving excitatory retinal input from one eye can be suppressed by high-contrast visual stimulation of the other eye, indicating that the LGN serves as a pre-cortical site of binocular processing.
Abstract: The lateral geniculate nucleus (LGN) of the dorsal thalamus is the primary recipient of the two eyes’ outputs. Most LGN neurons are monocular in that they are activated by visual stimulation through only one (dominant) eye. However, there are both intrinsic connections and inputs from binocular structures to the LGN that could provide these neurons with signals originating from the other (non-dominant) eye. Indeed, previous work introducing luminance differences across the eyes or using a single-contrast stimulus showed binocular modulation for single unit activity in anesthetized macaques and multiunit activity in awake macaques. Here, we sought to determine the influence of contrast viewed by both the non-dominant and dominant eyes on LGN single-unit responses in awake macaques. To do this, we adjusted each eye’s signal strength by independently varying the contrast of stimuli presented to the two eyes. Specifically, we recorded LGN single unit spiking activity in two awake macaques while they viewed drifting gratings of varying contrast. We found that LGN neurons of all types (parvo-, magno-, and koniocellular) were significantly suppressed when stimuli were presented at low contrast to the dominant eye and at high contrast to the non-dominant eye. Further, the inputs of the two eyes showed antagonistic interaction, whereby the magnitude of binocular suppression diminished with high contrast in the dominant eye, or low contrast in the non-dominant eye. These results suggest that the LGN represents a site of pre-cortical binocular processing involved in resolving discrepant contrast differences between the eyes. Significance A fundamental feature of the primate visual system is its binocular arrangement, which affords stereovision and hyperacuity. A consequence of this arrangement is that the two eyes’ views need to be resolved to yield singular vision, which is normally accomplished by fusion or suppression of one of the eye’s inputs. This binocular processing has been shown to occur in cortex, subsequent to thalamic processing. Here, we show that neurons in the lateral geniculate nucleus receiving excitatory retinal input from one eye can be suppressed by high-contrast visual stimulation of the other eye, indicating that the geniculate serves as a pre-cortical site of binocular processing.

Journal ArticleDOI
12 Jun 2021
TL;DR: High-contrast mechanochromic (MC) materials are prominent candidates for sensor, security, and memory applications; however, the development of materials with a large luminescence change (Δλem > 10...) has remained challenging.
Abstract: High-contrast mechanochromic (MC) materials are prominent candidates for sensor, security, and memory applications; however, the development of materials with a large luminescence change (Δλem > 10...

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a novel enhancement framework to correct luminance guided by weighted least squares (WLS), in which an image is first separated into a base layer and a detail layer by employing the WLS.
Abstract: Low/high or uneven luminance results in low contrast of remotely sensed images (RSIs), which makes it challenging to analyze their contents. In order to improve the contrast and preserving fine weak details of RSIs, this letter proposes a novel enhancement framework to correct luminance guided by weighted least squares (WLS), including the following key parts. First, an image is separated into a base layer and a detail layer by employing the WLS. Then, a learning network is proposed to correct luminance for the base layer enhancement. Next, an enhancement operator for improving the detail layer is computed by using the original image and the enhanced base layer. Finally, the output image is obtained with a fusion of the enhanced base and detail components. Both quantitatively and qualitatively experimental results verify that the proposed method performs better than the state of the arts in contrast improvement and detail preservation.
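
The base/detail pipeline can be sketched as below. A true WLS filter solves a sparse linear system (Farbman et al. 2008); OpenCV's edge-preserving filter stands in for it here, and fixed gamma correction stands in for the learned luminance-correction network:

```python
import cv2
import numpy as np

def base_detail_enhance(bgr, gain=1.5, gamma=0.7):
    """Split an image into base and detail layers, correct luminance on
    the base, boost the detail, and recombine. OpenCV's edge-preserving
    filter stands in for the WLS filter, and fixed gamma correction
    stands in for the learned luminance-correction network."""
    img = bgr.astype(np.float32) / 255.0
    base = cv2.edgePreservingFilter(bgr, flags=cv2.RECURS_FILTER)
    base = base.astype(np.float32) / 255.0
    detail = img - base                       # fine weak details
    out = np.power(base, gamma) + gain * detail
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```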

Journal ArticleDOI
TL;DR: This work proposes a multifeature fusion method (MFFM) to improve the color distortion and low contrast of underwater images and can balance the color and contrast and improve the visibility of underwater images.
Abstract: Light is scattered and absorbed when it is transmitted through seawater. Underwater images captured in water suffer from color attenuation, low contrast, and blurred details. To address these effects, we propose a multifeature fusion method (MFFM) to improve the color distortion and low contrast of underwater images. White balance technology is used to address the color cast of underwater images and obtain a color correction image. On this basis, the color correction image is enhanced by a guided filter to obtain a contrast-enhanced image. Four feature weights of the input images are calculated separately, and two normalized weight maps are constructed by normalization. To avoid a halo, the two normalized weight maps are multiscale fused. To improve the edge detail, the fused image is enhanced by a color adjustment technique to obtain the final enhanced image. Real underwater images are used for simulation and application testing. Compared with the current state-of-the-art methods, our method can balance the color and contrast and improve the visibility of underwater images. Extensive qualitative and quantitative experiments verify the superiority of the MFFM.
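
The white-balance step might be implemented with the gray-world assumption (the abstract does not specify which white-balance algorithm is used):

```python
import cv2
import numpy as np

def gray_world_white_balance(bgr):
    """Gray-world white balance, a common first step for the color cast
    of underwater images; assumed here, since the MFFM paper does not
    name its exact white-balance algorithm."""
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = means.mean() / (means + 1e-8)     # scale each channel toward gray
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```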

Posted Content
TL;DR: Wang et al. as discussed by the authors proposed an adaptive color and contrast enhancement, and denoising (ACCE-D) framework for underwater image enhancement, in which a Gaussian filter and a bilateral filter are respectively employed to decompose the high-frequency and low-frequency components.
Abstract: Images captured underwater are often characterized by low contrast, color distortion, and noise. To address these visual degradations, we propose a novel scheme by constructing an adaptive color and contrast enhancement, and denoising (ACCE-D) framework for underwater image enhancement. In the proposed framework, a Gaussian filter and a bilateral filter are respectively employed to decompose the high-frequency and low-frequency components. Benefiting from this separation, we utilize a soft-thresholding operation to suppress the noise in the high-frequency component. Accordingly, the low-frequency component is enhanced by using an adaptive color and contrast enhancement (ACCE) strategy. The proposed ACCE is a new adaptive variational framework implemented in the HSI color space, in which we design a Gaussian weight function and a Heaviside function to adaptively adjust the roles of the data item and the regularized item. Moreover, we derive a numerical solution for ACCE, and adopt a pyramid-based strategy to accelerate the solving procedure. Experimental results demonstrate that our strategy is effective in color correction, visibility improvement, and detail revealing. Comparison with state-of-the-art techniques also validates the superiority of the proposed method. Furthermore, we have verified the utility of our proposed ACCE-D for enhancing other types of degraded scenes, including foggy scenes, sandstorm scenes, and low-light scenes.
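
One plausible reading of the decomposition and denoising stages, with illustrative parameters (not the paper's):

```python
import cv2
import numpy as np

def soft_threshold(x, t):
    """Shrink coefficients toward zero; small (noisy) ones vanish."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def acce_d_decompose(gray, sigma=3.0, thresh=4.0):
    """Decomposition stage: a bilateral filter yields an edge-preserving
    low-frequency layer, while the residual after Gaussian smoothing
    carries high-frequency detail plus noise, which is soft-thresholded.
    The enhanced low and denoised high layers are later recombined."""
    g = gray.astype(np.float32)
    low = cv2.bilateralFilter(g, 9, 25, 9)
    high = g - cv2.GaussianBlur(g, (0, 0), sigma)
    return low, soft_threshold(high, thresh)
```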

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate spatial linear dark field control (LDFC) with an asymmetric pupil vector apodizing phase plate (APvAPP) coronagraph as a method to sense time-varying NCPA using the science image as a secondary wavefront sensor (WFS) running behind the primary AO system.
Abstract: Context. One of the key challenges facing direct exoplanet imaging is the continuous maintenance of the region of high contrast within which light from the exoplanet can be detected above the stellar noise. In high-contrast imaging systems, the dominant source of aberrations is the residual wavefront error that arises due to non-common path aberrations (NCPA) to which the primary adaptive optics (AO) system is inherently blind. Slow variations in the NCPA generate quasi-static speckles in the post-AO corrected coronagraphic image resulting in the degradation of the high-contrast dark hole created by the coronagraph. Aims. In this paper, we demonstrate spatial linear dark field control (LDFC) with an asymmetric pupil vector apodizing phase plate (APvAPP) coronagraph as a method to sense time-varying NCPA using the science image as a secondary wavefront sensor (WFS) running behind the primary AO system. By using the science image as a WFS, the NCPA to which the primary AO system is blind can be measured with high sensitivity and corrected, thereby suppressing the quasi-static speckles which corrupt the high contrast within the dark hole. Methods. On the Subaru Coronagraphic Extreme Adaptive Optics instrument (SCExAO), one of the coronagraphic modes is an APvAPP which generates two PSFs, each with a 180° D-shaped dark hole with approximately 10⁻⁴ contrast at λ = 1550 nm. The APvAPP was utilized to first remove the instrumental NCPA in the system and increase the high contrast within the dark holes. Spatial LDFC was then operated in closed-loop to maintain this high contrast in the presence of a temporally-correlated, evolving phase aberration with a root-mean-square wavefront error of 80 nm. In the tests shown here, an internal laser source was used, and the deformable mirror was used both to introduce random phase aberrations into the system and to then correct them with LDFC in closed-loop operation. Results. The results presented here demonstrate the ability of the APvAPP combined with spatial LDFC to sense aberrations in the high amplitude regime (∼80 nm). With LDFC operating in closed-loop, the dark hole is returned to its initial contrast and then maintained in the presence of a temporally-evolving phase aberration. We calculated the contrast in 1 λ/D spatial frequency bins in both open-loop and closed-loop operation, and compared the measured contrast in these two cases. This comparison shows that with LDFC operating in closed-loop, there is a factor of ∼3× improvement (approximately a half magnitude) in contrast across the full dark hole extent from 2–10 λ/D. This improvement is maintained over the full duration (10 000 iterations) of the injected temporally-correlated, evolving phase aberration. Conclusions. This work marks the first deployment of spatial LDFC on an active high-contrast imaging instrument. Our SCExAO testbed results show that the combination of the APvAPP with LDFC provides a powerful new focal plane wavefront sensing technique by which high-contrast imaging systems can maintain high contrast during long observations. This conclusion is further supported by a noise analysis of LDFC’s performance with the APvAPP in simulation.

Journal ArticleDOI
TL;DR: In this paper, a Naka-Rushton function modified to include luminance range and light-dark polarity accurately replicates both the statistics of light and dark features in natural scenes and cortical responses to multiple combinations of contrast and luminance.
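
For reference, the standard Naka-Rushton contrast response function that the paper modifies (the luminance-range and light-dark polarity extensions are not reproduced here):

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Standard Naka-Rushton contrast response function:
    R(c) = r_max * c^n / (c^n + c50^n).
    The response saturates with contrast and reaches half-maximum
    at c = c50."""
    cn = np.power(c, n)
    return r_max * cn / (cn + c50 ** n)

contrast = np.linspace(0.0, 1.0, 6)
print(naka_rushton(contrast))   # saturating response curve
```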

Journal ArticleDOI
TL;DR: In this study, a novel image contrast enhancement method, called low dynamic range histogram equalization (LDR-HE), is proposed based on the Quantized Discrete Haar Wavelet Transform (HWT), which provides a scalable and controlled dynamic range reduction in the histograms when the inverse operation is done in the reconstruction phase in order to regulate the excessive contrast enhancement rate.
Abstract: Conventional contrast enhancement methods stretch histogram bins to provide a uniform distribution. However, they also stretch existing natural noise, which causes abnormal distributions and annoying artifacts. Histogram equalization should mostly be performed in the low dynamic range (LDR), since noise is generally distributed in the high dynamic range (HDR). In this study, a novel image contrast enhancement method, called low dynamic range histogram equalization (LDR-HE), is proposed based on the Quantized Discrete Haar Wavelet Transform (HWT). In the frequency domain, LDR-HE performs a de-boosting operation on the high-pass channel by stretching the high frequencies of the probability mass function toward zero. For this purpose, amplitudes greater than the absolute mean frequency in the high-pass band are divided by a hyper-parameter alpha. This damping parameter, which regulates the global contrast of the processed image, is the coefficient of variation of the high frequencies, i.e., the standard deviation divided by the mean. This fundamental procedure of LDR-HE provides a scalable and controlled dynamic range reduction in the histograms when the inverse operation is done in the reconstruction phase, in order to regulate the excessive contrast enhancement rate. In the experimental studies, LDR-HE is compared with the 14 most popular local, global, adaptive, and brightness preserving histogram equalization methods. Experimental studies qualitatively and quantitatively show promising and encouraging results in terms of different quality measurement metrics such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), Contrast Improvement Index (CII), Universal Image Quality Index (UIQ), Quality-aware Relative Contrast Measure (QRCM), and Absolute Mean Brightness Error (AMBE). These results are not only assessed through qualitative visual observations but are also benchmarked with the state-of-the-art quantitative performance metrics.
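
The de-boosting of large high-pass Haar coefficients can be sketched with PyWavelets; this is a loose reading of the abstract, with the equalization and reconstruction details simplified:

```python
import numpy as np
import pywt

def damp_high_pass(img):
    """De-boost large high-frequency Haar coefficients: amplitudes above
    each band's absolute mean are divided by the band's coefficient of
    variation (std/mean), loosely following the LDR-HE description."""
    ll, (lh, hl, hh) = pywt.dwt2(img.astype(np.float32), "haar")
    for band in (lh, hl, hh):
        mag = np.abs(band)
        mean, std = mag.mean(), mag.std()
        alpha = std / (mean + 1e-8)          # damping parameter
        band[mag > mean] /= max(alpha, 1.0)  # never amplify
    return pywt.idwt2((ll, (lh, hl, hh)), "haar")
```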

Journal ArticleDOI
TL;DR: In this article, the authors evaluated the quick contrast sensitivity function (qCSF) method to measure visual function in eyes with macular disease and good letter acuity (VA ≥ 20/30).
Abstract: Introduction: Contrast sensitivity function (CSF) may better estimate a patient’s visual function compared with visual acuity (VA). Our study evaluates the quick CSF (qCSF) method to measure visual function in eyes with macular disease and good letter acuity. Methods: Patients with maculopathies (retinal vein occlusion, macula-off retinal detachment, dry age-related macular degeneration and wet age-related macular degeneration) and good letter acuity (VA ≥ 20/30) were included. The qCSF method uses an intelligent algorithm to measure CSF across multiple spatial frequencies. All maculopathy eyes combined and individual macular disease groups were compared with healthy control eyes. Main outcomes included area under the log CSF (AULCSF) and six CS thresholds ranging from 1 cycle per degree (cpd) to 18 cpd. Results: 151 eyes with maculopathy and 93 control eyes with VA ≥ 20/30 were included. The presence of a maculopathy was associated with significant reduction in AULCSF (β: −0.174; p ...). Conclusion: CSF measured with the qCSF active learning method was found to be significantly reduced in eyes affected by macular disease despite good VA, compared with healthy control eyes. The qCSF method is a promising clinical tool to quantify subtle visual deficits that may otherwise go unrecognised by current testing methods.
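
AULCSF is conventionally computed as the area under the log contrast sensitivity curve over log spatial frequency. A sketch with hypothetical sensitivities at assumed qCSF test frequencies:

```python
import numpy as np

# Hypothetical contrast sensitivities at assumed qCSF test frequencies
# (cpd); the study's actual frequencies and values are not given here.
sf = np.array([1.0, 1.5, 3.0, 6.0, 12.0, 18.0])
cs = np.array([60.0, 85.0, 90.0, 50.0, 12.0, 3.0])

aulcsf = np.trapz(np.log10(cs), np.log10(sf))   # area under the log CSF
print(f"AULCSF = {aulcsf:.3f}")
```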

Journal ArticleDOI
Fushuo Huo1, Xuegui Zhu1, Hongjiang Zeng, Qifeng Liu1, Jian Qiu 
TL;DR: The main idea of the paper is to combine a multi-scale fusion strategy with prior knowledge, thereby achieving balanced image contrast enhancement and intrinsic color preservation efficiently; experiments show the method is comparable to, and even better than, more complex state-of-the-art techniques.
Abstract: Haze can seriously degrade the visibility and visual quality of outdoor optical sensor systems, e.g., driving assistance, remote sensing, and video surveillance. Single image dehazing is an intractable problem due to its ill-posed nature. The main idea of the paper is to combine a multi-scale fusion strategy with prior knowledge, thereby delivering balanced image contrast enhancement and intrinsic color preservation efficiently. The atmospheric illumination prior (AIP) establishes that haze mainly degrades the contrast of the luminance channel rather than the chrominance channels in the YCrCb colorspace. To this end, we first identify and remove the color veil (unbalanced color channels) with a white balance algorithm, to reduce the influence of unbalanced color channels neglected by the AIP. Considering the new observation that hazy regions exhibit low contrast with high-intensity pixels, dense and mild haze are enhanced by a set of histogram modification techniques, respectively. Then, with the derived inputs, multi-scale fusion based on a Laplacian decomposition strategy is proposed to blend visual contrast only in the luminance channel. Without relying on complex enhancement algorithms and dealing with only one channel, the proposed method is attractive for real-time applications. Moreover, the proposed method can be directly applied to video sequences frame by frame, alleviating visual artifacts. The simulation results show that our method is comparable to and even better than more complex state-of-the-art techniques.
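
Restricting enhancement to the luminance channel, as the AIP motivates, can be sketched as follows; CLAHE stands in for the paper's set of histogram modification techniques, and the file name is hypothetical:

```python
import cv2

def enhance_luminance_only(bgr):
    """Apply contrast enhancement to the Y channel of YCrCb only, leaving
    chrominance untouched, per the atmospheric illumination prior. CLAHE
    is an illustrative stand-in for the paper's histogram techniques."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(y)
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)

hazy = cv2.imread("hazy_scene.png")        # hypothetical input file
if hazy is not None:
    cv2.imwrite("dehazed_luma.png", enhance_luminance_only(hazy))
```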