scispace - formally typeset
Author

Craig K. Abbey

Bio: Craig K. Abbey is an academic researcher at the University of California, Santa Barbara. He has contributed to research in the topics Observer (quantum physics) and Imaging phantom, has an h-index of 37, and has co-authored 218 publications receiving 4,407 citations. His previous affiliations include the University of California, San Francisco and the University of Arizona.


Papers
Journal ArticleDOI
TL;DR: Human-observer performance in several signal-known-exactly detection tasks is evaluated in psychophysical studies using the two-alternative forced-choice method; among other findings, human observers are able to detect ever more subtle lesions at increased exposure times.
Abstract: We consider detection of a nodule signal profile in noisy images meant to roughly simulate the statistical properties of tomographic image reconstructions in nuclear medicine. The images have two sources of variability arising from quantum noise from the imaging process and anatomical variability in the ensemble of objects being imaged. Both of these sources of variability are simulated by a stationary Gaussian random process. Sample images from this process are generated by filtering white-noise images. Human-observer performance in several signal-known-exactly detection tasks is evaluated through psychophysical studies by using the two-alternative forced-choice method. The tasks considered investigate parameters of the images that influence both the signal profile and pixel-to-pixel correlations in the images. The effect of low-pass filtering is investigated as an approximation to regularization implemented by image-reconstruction algorithms. The relative magnitudes of the quantum and the anatomical variability are investigated as an approximation to the effects of exposure time. Finally, we study the effect of the anatomical correlations in the form of an anatomical slope as an approximation to the effects of different tissue types. Human-observer performance is compared with the performance of a number of model observers computed directly from the ensemble statistics of the images used in the experiments for the purpose of finding predictive models. The model observers investigated include a number of nonprewhitening observers, the Hotelling observer (which is equivalent to the ideal observer for these studies), and six implementations of channelized-Hotelling observers. The human observers demonstrate large effects across the experimental parameters investigated. In the regularization study, performance exhibits a mild peak at intermediate levels of regularization before degrading at higher levels. The exposure-time study shows that human observers are able to detect ever more subtle lesions at increased exposure times. The anatomical slope study shows that human-observer performance degrades as anatomical variability extends into higher spatial frequencies. Of the observers tested, the channelized-Hotelling observers best capture the features of the human data.
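The observer models compared above can be illustrated with a small simulation. The sketch below is illustrative only (all parameters are invented, and it is not the authors' code): it runs two-alternative forced-choice trials for a matched-filter observer in white Gaussian noise, the special case in which the Hotelling observer reduces to a matched filter, and compares the simulated proportion correct against the theoretical prediction Pc = Φ(d′/√2).

```python
import math
import random

random.seed(0)

# 1-D stand-in for a nodule profile: a Gaussian bump on a 32-pixel window
# (amplitude and width are arbitrary illustrative choices)
N = 32
sigma_noise = 1.0
signal = [0.8 * math.exp(-((i - N / 2) ** 2) / (2 * 3.0 ** 2)) for i in range(N)]

# In uncorrelated (white) noise the Hotelling/ideal observer reduces to the
# matched filter: correlate each image with the known signal template.
def template_response(image):
    return sum(s * x for s, x in zip(signal, image))

def run_2afc(n_trials=4000):
    correct = 0
    for _ in range(n_trials):
        noise_only  = [random.gauss(0.0, sigma_noise) for _ in range(N)]
        signal_plus = [s + random.gauss(0.0, sigma_noise) for s in signal]
        # two-alternative forced choice: pick the interval with the larger response
        if template_response(signal_plus) > template_response(noise_only):
            correct += 1
    return correct / n_trials

# Theory: d' = ||s||/sigma, and 2AFC proportion correct Pc = Phi(d'/sqrt(2))
d_prime = math.sqrt(sum(s * s for s in signal)) / sigma_noise
pc_theory = 0.5 * (1.0 + math.erf(d_prime / 2.0))
pc_sim = run_2afc()
print(f"d'={d_prime:.2f}  simulated Pc={pc_sim:.3f}  predicted Pc={pc_theory:.3f}")
```

In correlated noise, as in the experiments above, the Hotelling observer instead applies the inverse noise covariance before the template; the white-noise case here is only the simplest check of the simulation machinery.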

332 citations

Journal ArticleDOI
TL;DR: All moments of both the likelihood and the log likelihood under both hypotheses can be derived from this one function, and the AUC can be expressed, to an excellent approximation, in terms of the likelihood-generating function evaluated at the origin.
Abstract: We continue the theme of previous papers [J. Opt. Soc. Am. A 7, 1266 (1990); 12, 834 (1995)] on objective (task-based) assessment of image quality. We concentrate on signal-detection tasks and figures of merit related to the ROC (receiver operating characteristic) curve. Many different expressions for the area under an ROC curve (AUC) are derived for an arbitrary discriminant function, with different assumptions on what information about the discriminant function is available. In particular, it is shown that AUC can be expressed by a principal-value integral that involves the characteristic functions of the discriminant. Then the discussion is specialized to the ideal observer, defined as one who uses the likelihood ratio (or some monotonic transformation of it, such as its logarithm) as the discriminant function. The properties of the ideal observer are examined from first principles. Several strong constraints on the moments of the likelihood ratio or the log likelihood are derived, and it is shown that the probability density functions for these test statistics are intimately related. In particular, some surprising results are presented for the case in which the log likelihood is normally distributed under one hypothesis. To unify these considerations, a new quantity called the likelihood-generating function is defined. It is shown that all moments of both the likelihood and the log likelihood under both hypotheses can be derived from this one function. Moreover, the AUC can be expressed, to an excellent approximation, in terms of the likelihood-generating function evaluated at the origin. This expression is the leading term in an asymptotic expansion of the AUC; it is exact whenever the likelihood-generating function behaves linearly near the origin. It is also shown that the likelihood-generating function at the origin sets a lower bound on the AUC in all cases.
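One of the AUC identities used in this line of work can be checked numerically with no special machinery: for any scalar discriminant, the AUC equals the probability that a randomly drawn signal-present score exceeds a signal-absent score (the Wilcoxon-Mann-Whitney statistic). A minimal sketch, assuming Gaussian score distributions with invented parameters:

```python
import math
import random

random.seed(1)

# Hypothetical discriminant scores: Gaussian under both hypotheses
# (means and variance are invented for illustration)
mu0, mu1, sd = 0.0, 1.0, 1.0
t0 = [random.gauss(mu0, sd) for _ in range(2000)]  # signal absent
t1 = [random.gauss(mu1, sd) for _ in range(2000)]  # signal present

# AUC as the Wilcoxon-Mann-Whitney statistic: P(t1 > t0), ties count half
pairs_won = sum((y > x) + 0.5 * (y == x) for y in t1 for x in t0)
auc_mc = pairs_won / (len(t0) * len(t1))

# Closed form for equal-variance Gaussian scores:
# AUC = Phi((mu1 - mu0) / (sd * sqrt(2))) = 0.5 * (1 + erf((mu1 - mu0) / (2 * sd)))
auc_exact = 0.5 * (1.0 + math.erf((mu1 - mu0) / (sd * 2.0)))
print(f"Monte Carlo AUC = {auc_mc:.3f}, exact = {auc_exact:.3f}")
```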

258 citations

Journal ArticleDOI
TL;DR: In this article, the authors measured human observers' detectability of aperiodic signals in noise with two components (white and low-pass Gaussian) and found that the signal detection task was always noise limited rather than contrast limited (i.e., image noise was always much larger than observer internal noise).
Abstract: We measured human observers' detectability of aperiodic signals in noise with two components (white and low-pass Gaussian). The white-noise component ensured that the signal detection task was always noise limited rather than contrast limited (i.e., image noise was always much larger than observer internal noise). The low-pass component can be considered to be a statistically defined background. Contrast threshold elevation was not linearly related to the rms background contrast. Our results gave power-law exponents near 0.6, similar to that found for deterministic masking. The Fisher-Hotelling linear discriminant model assessed by Rolland and Barrett [J. Opt. Soc. Am. A 9, 649 (1992)] and the modified nonprewhitening matched filter model suggested by Burgess [J. Opt. Soc. Am. A 11, 1237 (1994)] for describing signal detection in statistically defined backgrounds did not fit our more precise data. We show that it is not possible to find any nonprewhitening model that can fit our data. We investigated modified Fisher-Hotelling models by using spatial-frequency channels, as suggested by Myers and Barrett [J. Opt. Soc. Am. A 4, 2447 (1987)]. Two of these models did give good fits to our data, which suggests that we may be able to do partial prewhitening of image noise.
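The power-law dependence of contrast threshold on rms background contrast can be estimated by ordinary least squares in log-log coordinates. The sketch below fits synthetic data generated with an exponent of 0.6; the data are fabricated for illustration and are not the authors' measurements.

```python
import math
import random

random.seed(2)

# Fabricated thresholds following c_t = k * c_b**0.6, with small multiplicative scatter
k, true_exp = 0.05, 0.6
c_b = [0.02 * (2 ** i) for i in range(8)]  # rms background contrasts (octave spacing)
c_t = [k * c ** true_exp * math.exp(random.gauss(0.0, 0.02)) for c in c_b]

# Log-log least squares: log c_t = log k + exponent * log c_b,
# so the fitted slope is the power-law exponent
xs = [math.log(c) for c in c_b]
ys = [math.log(t) for t in c_t]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"fitted power-law exponent = {slope:.3f}")
```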

184 citations

Journal ArticleDOI
TL;DR: A difference in the magnitude of the classification images is found, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
Abstract: In the Posner cueing paradigm, observers' performance in detecting a target is typically better in trials in which the target is present at the cued location than in trials in which the target appears at the uncued location. This effect can be explained in terms of a Bayesian observer where visual attention simply weights the information differently at the cued (attended) and uncued (unattended) locations without a change in the quality of processing at each location. Alternatively, it could also be explained in terms of visual attention changing the shape of the perceptual filter at the cued location. In this study, we use the classification image technique to compare the human perceptual filters at the cued and uncued locations in a contrast discrimination task. We did not find statistically significant differences between the shapes of the inferred perceptual filters across the two locations, nor did the observed differences account for the measured cueing effects in human observers. Instead, we found a difference in the magnitude of the classification images, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
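The classification image technique itself is straightforward to sketch for a linear observer: average the noise fields by response category and take the difference. The simulation below is a toy model (the template, internal-noise level, and trial count are invented) showing that the difference image recovers the observer's template up to scale.

```python
import math
import random

random.seed(3)

# A hypothetical linear observer: responds "yes" when the correlation of the
# noise field with an internal template (plus internal noise) is positive.
N = 16
template = [math.exp(-((i - N / 2) ** 2) / (2 * 2.0 ** 2)) for i in range(N)]

def observer_says_yes(noise):
    drive = sum(t * n for t, n in zip(template, noise)) + random.gauss(0.0, 0.5)
    return drive > 0.0

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials
yes_sum, no_sum = [0.0] * N, [0.0] * N
n_yes = n_no = 0
for _ in range(20000):
    noise = [random.gauss(0.0, 1.0) for _ in range(N)]
    if observer_says_yes(noise):
        n_yes += 1
        yes_sum = [a + b for a, b in zip(yes_sum, noise)]
    else:
        n_no += 1
        no_sum = [a + b for a, b in zip(no_sum, noise)]
ci = [ys / n_yes - ns / n_no for ys, ns in zip(yes_sum, no_sum)]

# The classification image recovers the template up to scale; check by correlation
mc = sum(ci) / N
mt = sum(template) / N
num = sum((c - mc) * (t - mt) for c, t in zip(ci, template))
den = math.sqrt(sum((c - mc) ** 2 for c in ci)
                * sum((t - mt) ** 2 for t in template))
corr = num / den
print(f"correlation between classification image and template: {corr:.3f}")
```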

161 citations

Patent
30 Jun 2000
TL;DR: In this article, the authors proposed a method for image guidance of coronary stent deployment using radiopaque markers and the image processing technique of moving layer decomposition, where the markers are attached to guidewires or delivery balloons that are used to place the stent and co-move with the coronary vessel.
Abstract: A method for image guidance of coronary stent deployment using radiopaque markers and the image processing technique of moving layer decomposition. The radiopaque markers are attached to guidewires or delivery balloons that are used to place the stent and co-move with the coronary vessel. A series of fluoroscopic images is taken during stent placement and used to generate layer images which represent different structures in the angiograms, such as the stent and guidewires, background structures, etc. The clearly visible images of the markers are used in the layer decomposition. Although stents are less radiopaque than the markers, visibility of previously deployed stents is also enhanced in the layer images. The layer images are used to guide placement of multiple stents to prevent overlap or gaps between the stents. After stent expansion, angiographic images are acquired of the lumen filled with liquid contrast agent. Layer decomposition is applied to these images in order to visually or quantitatively determine the lumen narrowing (or broadening) in the stented region.

131 citations


Cited by
Journal ArticleDOI
TL;DR: Progress in the field over the last 20 years is reviewed, and some of the challenges that remain for the years to come are suggested.
Abstract: The analysis of medical images has been woven into the fabric of the pattern analysis and machine intelligence (PAMI) community since the earliest days of these Transactions. Initially, the efforts in this area were seen as applying pattern analysis and computer vision techniques to another interesting dataset. However, over the last two to three decades, the unique nature of the problems presented within this area of study has led to the development of a new discipline in its own right. Examples include the types of image information that are acquired, the fully three-dimensional image data, the nonrigid nature of object motion and deformation, and the statistical variation of both the underlying normal and abnormal ground truth. In this paper, we look at progress in the field over the last 20 years and suggest some of the challenges that remain for the years to come.

4,249 citations

Journal ArticleDOI
TL;DR: A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction, using changes in specimen position at previous tilt angles to predict the position at the current tilt angle.

3,995 citations

01 Jan 2010
TL;DR: An overview of apoptosis in the context of central nervous system disease, focusing on the mitochondrial reactions and anti-apoptotic factors that play a central role in it.
Abstract: Treatment of central nervous system disease aims to preserve the function of normal cells (neurons), yet functional deficits, as in cerebrovascular disease, are often caused by cell death. In the treatment of brain tumors, on the other hand, approaches that aim to kill tumor cells, such as drug therapy and radiation therapy, play a major role. In either case, understanding the mechanisms of cell death is important for understanding the various disease states and treatments. The best-studied form of cell death at present is apoptosis. Here we outline the mitochondrial reactions that occupy a central place in apoptosis and the anti-apoptotic factors involved.

2,716 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images. Consequently, we introduce methods that tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
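Co-occurrence matrices of the kind mentioned above can be computed directly from pixel pairs. A minimal sketch for a gray-level co-occurrence matrix with a one-pixel horizontal offset, using a toy 4x4 image invented for illustration:

```python
# Toy 4-level grayscale image (values 0-3), invented for illustration
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
levels = 4

def glcm(img, dr, dc):
    """Count co-occurrences of gray levels at pixel offset (dr, dc)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
    return m

horizontal = glcm(image, 0, 1)  # pairs one pixel to the right
for row in horizontal:
    print(row)
```

Each 4x4 image row contributes three horizontal pairs, so the counts sum to 12; normalizing the matrix turns the counts into joint probabilities, from which texture statistics (contrast, homogeneity, etc.) can be derived.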

2,134 citations

Journal ArticleDOI
TL;DR: A major challenge for neuroscientists is to test experimentally how this might be achieved in populations of neurons, and so determine whether and how neurons code information about sensory uncertainty.

2,067 citations