Automated quality assessment of retinal fundus photos
Frequently Asked Questions (9)
Q2. How do the authors calculate the gradient magnitude of the input image?
Since high gradients identify sharp edges, the authors calculate the gradient magnitude image G of the input image by combining the derivative I_x in the x-direction and the derivative I_y in the y-direction.
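The gradient-magnitude computation described above can be sketched with simple central differences; the paper does not specify the derivative operator here, so the use of numpy's `gradient` is an assumption of this sketch:

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient magnitude G from x- and y-derivatives.

    Minimal sketch: central differences stand in for whatever
    derivative operator (e.g. Sobel) the authors actually used.
    """
    ix = np.gradient(img.astype(float), axis=1)  # derivative I_x (x-direction)
    iy = np.gradient(img.astype(float), axis=0)  # derivative I_y (y-direction)
    return np.sqrt(ix ** 2 + iy ** 2)

# Usage: a vertical step edge yields high gradient magnitude at the edge,
# and zero in the flat regions.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
g = gradient_magnitude(img)
```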
Q3. What is the p-value of the criteria for the Haralick features?
The criteria are based on the recognizability and dissimilarity of certain structures in the eye background as well as on illumination homogeneity and sharpness.
Q4. How long does it take to compute the Haralick features?
The average computation time is 0.8 seconds for the sharpness metrics, 2.2 seconds for the clustering features and 2.4 seconds for the Haralick features on an Intel Core 2 Quad Q9550 system at 2.4 GHz with 3 GB RAM.
Q5. What is the metric used to evaluate the quality of the image?
h_1 = (1/4) ∑_r h_1^r  (15)
h_2 = (1/4) ∑_r h_2^r  (16)
h_3 = (1/4) ∑_r h_3^r  (17)

Thus, texture statistics are used to calculate generic quality features: entropy h_1 for common image sharpness, energy h_2 for image homogeneity and contrast h_3.
Q6. How many subsets were used for each experiment?
Five subsets consisted of 6 bad and 24 good images, four subsets of 7 bad and 23 good images and one subset of 7 bad and 24 good images.
Q7. What is the p-value for the proposed method?
For quantifying the performance of the proposed method, the authors calculated the area under the ROC curve (AUC), the p-value related to ISC and the p-value related to the final feature combination of Haralick, clustering and sharpness features.
Q8. How is the normalized co-occurrence matrix p(i, j, r) computed?
P(i, j, 0°)   = #{(a, x) ∈ [1, …, n], (b, y) ∈ [1, …, m] | g_ab = i, g_xy = j, a − x = 0, |b − y| = 1}  (7)
P(i, j, 45°)  = #{(a, x) ∈ [1, …, n], (b, y) ∈ [1, …, m] | g_ab = i, g_xy = j, (a − x = 1, b − y = −1) ∨ (a − x = −1, b − y = 1)}  (8)
P(i, j, 90°)  = #{(a, x) ∈ [1, …, n], (b, y) ∈ [1, …, m] | g_ab = i, g_xy = j, |a − x| = 1, b − y = 0}  (9)
P(i, j, 135°) = #{(a, x) ∈ [1, …, n], (b, y) ∈ [1, …, m] | g_ab = i, g_xy = j, (a − x = 1, b − y = 1) ∨ (a − x = −1, b − y = −1)}  (10)

Each matrix entry is normalized by the total number of neighboring pixel pairs N_r in its direction:

p(i, j, r) = P(i, j, r) / N_r  (11)

Based on the four co-occurrence matrices, entropy h_1^r, energy h_2^r and contrast h_3^r are calculated for each direction r:

h_1^r = −∑_{i=1}^{m·n} ∑_{j=1}^{m·n} p(i, j, r) log(p(i, j, r))  (12)
h_2^r = ∑_{i=1}^{m·n} ∑_{j=1}^{m·n} p(i, j, r)²  (13)
h_3^r = ∑_{l=0}^{m·n−1} l² { ∑_{|i−j|=l} p(i, j, r) }  (14)

The final Haralick features h_1, h_2 and h_3 are generated by computing the mean over all directions.
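The pipeline above (counting neighboring pixel pairs per direction, normalizing to p(i, j, r), and averaging directional entropy, energy and contrast) can be sketched as follows. The function names, the symmetric pair counting and the assumption of small integer gray levels are choices of this sketch, not details from the paper:

```python
import numpy as np

# (row, col) offsets for the four directions 0°, 45°, 90°, 135°.
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (1, 0), 135: (-1, -1)}

def cooccurrence(img, levels, dr, dc):
    """Count neighboring pixel pairs P and normalize to p(i, j, r), eq. (11)."""
    P = np.zeros((levels, levels))
    n, m = img.shape
    for a in range(n):
        for b in range(m):
            for s in (1, -1):              # count pairs in both orders (symmetric)
                x, y = a + s * dr, b + s * dc
                if 0 <= x < n and 0 <= y < m:
                    P[img[a, b], img[x, y]] += 1
    return P / P.sum()                     # divide by N_r

def haralick(img, levels):
    """Mean entropy, energy and contrast over the four directions."""
    feats = []
    for dr, dc in OFFSETS.values():
        p = cooccurrence(img, levels, dr, dc)
        nz = p[p > 0]
        entropy = -np.sum(nz * np.log(nz))       # eq. (12)
        energy = np.sum(p ** 2)                  # eq. (13)
        i, j = np.indices(p.shape)
        contrast = np.sum((i - j) ** 2 * p)      # eq. (14), rewritten as sum over (i, j)
        feats.append((entropy, energy, contrast))
    return np.mean(feats, axis=0)                # eqs. (15)-(17)

# Usage: a perfectly uniform image has zero entropy, energy 1 and zero contrast.
f = haralick(np.zeros((3, 3), dtype=int), levels=1)
```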
Q9. What was the parameter set for each method?
The variance γ of the radial basis kernel and the penalty factor C were calculated using a grid search strategy in order to find the best parameter set for each method.