scispace - formally typeset

Showing papers by "Andrew B. Watson published in 2017"


Journal ArticleDOI
TL;DR: QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes and provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
Abstract: QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
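The core Bayesian-adaptive idea described above can be sketched in a few lines: maintain a posterior over psychometric-function parameters on a grid, pick the next stimulus to minimize the expected posterior entropy, and update the posterior with the trial outcome. This is a minimal, hypothetical illustration for a single threshold parameter with a Weibull psychometric function, not the QUEST+ implementation itself; all parameter values and grids below are assumptions.

```python
import numpy as np

# Hypothetical sketch of the entropy-minimizing adaptive loop; grids,
# slope, guess rate, and lapse rate are all assumed values, not QUEST+'s.
thresholds = np.linspace(-2, 2, 41)   # candidate thresholds (log units)
stimuli = np.linspace(-2, 2, 21)      # candidate stimulus intensities
posterior = np.full(thresholds.size, 1.0 / thresholds.size)  # uniform prior

def p_correct(stim, thresh, slope=3.5, guess=0.5, lapse=0.02):
    """Weibull psychometric function: P(correct | stimulus, threshold)."""
    return guess + (1 - guess - lapse) * (1 - np.exp(-10 ** (slope * (stim - thresh))))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy(stim, posterior):
    """Posterior entropy after the trial, averaged over the two outcomes."""
    pc = p_correct(stim, thresholds)
    h = 0.0
    for outcome_p in (pc, 1 - pc):        # correct / incorrect
        joint = posterior * outcome_p
        pz = joint.sum()
        if pz > 0:
            h += pz * entropy(joint / pz)
    return h

def next_stimulus(posterior):
    return min(stimuli, key=lambda s: expected_entropy(s, posterior))

def update(posterior, stim, correct):
    like = p_correct(stim, thresholds)
    post = posterior * (like if correct else 1 - like)
    return post / post.sum()

# Simulate an observer with a true threshold of 0.5 for 40 trials.
rng = np.random.default_rng(0)
true_threshold = 0.5
for _ in range(40):
    s = next_stimulus(posterior)
    correct = rng.random() < p_correct(s, true_threshold)
    posterior = update(posterior, s, correct)
estimate = thresholds[np.argmax(posterior)]
```

With the simulated responses above, the posterior mode converges toward the true threshold within a few dozen trials; QUEST+ generalizes this loop to arbitrary stimulus dimensions, parameter vectors, and outcome sets.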

105 citations


01 Jan 2017
TL;DR: Ahumada and Watson (2013) used visible contrast energy to predict vision thresholds for visual images; for low contrast images, their local-luminance-based visible contrast image v(x, y) can be computed from the contrast image c(x, y) using an "optical" low pass filter M0(fx, fy) and an "inhibitory surround" low pass filter M1(fx, fy).
Abstract: Ahumada and Watson (2013) used visible contrast energy to predict vision thresholds for visual images. For low contrast images, their local-luminance-based visible contrast image v(x, y) can be computed from the contrast image c(x, y) using an "optical" low pass filter M0(fx, fy) and an "inhibitory surround" low pass filter M1(fx, fy):

v(x, y) = FFT^-1( M0(fx, fy) (1 - a M1(fx, fy)) FFT(c(x, y)) ).   (1)

Barten (1994) added temporal low pass filters H0(ft) and H1(ft) to form a visible contrast "movie" v(x, y, t):

v(x, y, t) = FFT^-1( M0(fx, fy) H0(ft) (1 - M1(fx, fy) H1(ft)) FFT(c(x, y, t)) ).   (2)

If v(x, y, t) is masked by white noise, the detection performance of an ideal observer is a function only of the noise level and the signal energy,

Ev = ∫∫∫ v(x, y, t)^2 dx dy dt = ||v(x, y, t)||.

Letting V(fx, fy, ft) = FFT(v(x, y, t)), the final inverse FFT need not be computed, since

Ev = ||V(fx, fy, ft)|| = ∫∫∫ |V(fx, fy, ft)|^2 dfx dfy dft

if the FFT is appropriately normalized. Therefore

Ev = ||M0(fx, fy) H0(ft) (1 - M1(fx, fy) H1(ft)) C(fx, fy, ft)||,   (3)

where C(fx, fy, ft) = FFT(c(x, y, t)). When the image sequence is space-time separable, C(fx, fy, ft) = CXY(fx, fy) CT(ft), and, dropping the f's for clarity,

Ev = ||(M0 CXY) (H0 CT) - (M0 M1 CXY) (H0 H1 CT)||.   (4)
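The frequency-domain shortcut of Eq. (3) is easy to verify numerically with NumPy: filter the spectrum of a contrast movie, then compute the energy once via Parseval's relation and once by inverting the FFT. The Gaussian filter shapes and bandwidths below are placeholders chosen for illustration, not the actual Ahumada–Watson or Barten filters, and the surround gain is taken as 1 to match Eqs. (2)–(4).

```python
import numpy as np

# Assumed test movie c(x, y, t): small random contrasts.
ny, nx, nt = 32, 32, 16
rng = np.random.default_rng(1)
c = 0.01 * rng.standard_normal((ny, nx, nt))

# Frequency grids matching np.fft.fftn's layout.
fy = np.fft.fftfreq(ny)
fx = np.fft.fftfreq(nx)
ft = np.fft.fftfreq(nt)
FY, FX, FT = np.meshgrid(fy, fx, ft, indexing="ij")

# Placeholder Gaussian filters (assumed shapes, not the published ones).
M0 = np.exp(-(FX**2 + FY**2) / (2 * 0.15**2))  # "optical" spatial low-pass
M1 = np.exp(-(FX**2 + FY**2) / (2 * 0.05**2))  # "inhibitory surround" spatial low-pass
H0 = np.exp(-FT**2 / (2 * 0.20**2))            # temporal low-pass on the center
H1 = np.exp(-FT**2 / (2 * 0.08**2))            # temporal low-pass on the surround

# Eq. (3): filter the spectrum and take the energy directly in frequency.
C = np.fft.fftn(c)
V = M0 * H0 * (1 - M1 * H1) * C

# Parseval: sum |v|^2 = sum |V|^2 / N for NumPy's unnormalized FFT,
# so the inverse transform is not needed to get Ev.
Ev_freq = np.sum(np.abs(V) ** 2) / c.size

# Cross-check by actually inverting, as in Eq. (2).
v = np.fft.ifftn(V)
Ev_space = np.sum(np.abs(v) ** 2)
```

The two energies agree to numerical precision, confirming that the visible contrast energy can be computed without ever forming v(x, y, t) explicitly.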

1 citation