HAL Id: hal-00263734 (https://hal.archives-ouvertes.fr/hal-00263734), submitted on 13 Mar 2008.

To cite this version: M. Chambah, D. Semani, A. Renouf, P. Courtellemont, A. Rizzi. Underwater Color Constancy: Enhancement of Automatic Live Fish Recognition. 16th Annual Symposium on Electronic Imaging, 2004, United States. pp. 157-168. hal-00263734.

Underwater Color Constancy: Enhancement of Automatic Live Fish Recognition
M. Chambah, D. Semani, A. Renouf, P. Courtellemont, A. Rizzi*
L3I, Université de La Rochelle, France
E-mail: {mchambah, dsemani, arenouf, pcourtel}@univ-lr.fr
* Dept. of Information Technology, University of Milano, Italy
E-mail: rizzi@dti.unimi.it
ABSTRACT
We present in this paper some advances in the color restoration of underwater images, especially with regard to the strong and non-uniform color cast that is typical of underwater images. The proposed color correction method is based on the ACE model, an unsupervised color equalization algorithm. ACE is a perceptual approach inspired by some adaptation mechanisms of the human visual system, in particular lightness constancy and color constancy. A perceptual approach offers several advantages: it is unsupervised, robust and has local filtering properties, which lead to more effective results. The restored images give better results when displayed or processed (fish segmentation and feature extraction). The presented preliminary results are satisfying and promising.
Keywords: Automatic fish species recognition, underwater imaging, color constancy, automatic color correction,
color pattern recognition, color features.
1. THE AQU@THEQUE PROJECT
The Aqu@thèque project consists of developing an information system dedicated to aquariums. The fish tanks are filmed live by a remotely controlled video camera. The system allows an aquarium visitor to control the camera and to select a fish of interest on an interactive interface by pointing it out on a touch screen.
Then the selected fish is displayed on the screen and automatically identified using a real-time recognition method.
Educational information and a virtual representation of the identified fish in its natural environment are also
displayed in real time on another screen.
Fig. 1: The Aqu@thèque interactive environment

The project is based on an automatic recognition method, an educational information retrieval system and a behavioral modeling system for virtual fish. In this paper, we focus on the real-time recognition part of the system, and especially on the correction and removal of the color cast introduced in the videos by the aquatic environment.
The recognition system consists of three steps:
- A segmentation step that extracts the main regions corresponding to fish in the video sequences.
- A feature extraction step based on the segmentation results.
- A fish classification step among the different species present in the tank.
The work that preceded this article focused, on the one hand, on the segmentation, feature extraction and classification steps [1][2][7], and, on the other hand, on an unsupervised enhancement algorithm for digital images called ACE, for Automatic Color Equalization [4][5][10]. ACE provided experimental evidence that the color balance of an image can be corrected automatically using a perceptual approach.
We present in this paper the preliminary results of a new technique for underwater image color restoration based on ACE. This work is a collaboration between the L3I lab of the Université de La Rochelle and the Department of Information Technology of the University of Milano.
After comparing the results of some usual color constancy methods with those of our proposed technique, we present the impact of the color correction on the subsequent steps, i.e. the segmentation and fish classification steps. We report and discuss the results of our experiments and give some future prospects for this work.
2. UNDERWATER COLOR CONSTANCY
Videos taken in an aquatic environment present a strong and non-uniform color cast: the cast has a different color and intensity in the foreground, the background (greater depth), the shadows and the highlights. This cast has to be removed.
The images taken from the tanks of the aquarium are mainly of three kinds. The first is an overall view with different species of fish; this kind of image is the most chromatically diverse (see fig. 3). The second is an image with several fish belonging to the same species (see fig. 15). The third is an image with a close view of a single fish (see fig. 19). The latter type of image is the least chromatically diverse and hence the hardest to balance.
Since there is no universal color constancy method, we use two criteria to select among the many existing color constancy methods: no a priori information should be required, because unsupervised operation is an important factor for live videos, and the method must be applicable to natural scenes.
Among the methods meeting these criteria are gray world (GW) and Retinex white patch (WP), which are widely used and often give good results. These methods, however, are designed to remove the color cast caused by an illuminant shift, whereas we want to correct a cast due to the aquatic environment combined with the artificial illumination of that environment. Other retained methods are the GW/WP hybrid method [3] and the ACE method [4][5][10].
The GW/WP hybrid method is designed to handle more than one cast in the same image. It consists of a progressive combination of GW and WP to estimate and correct the color cast in highlights, mid-tones and shadows.
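To make the GW and WP assumptions concrete, the following is a minimal sketch (not the implementation used in this work) of the per-channel scaling each method applies, assuming an RGB image stored as a floating-point NumPy array with values in [0, 1]:

```python
import numpy as np

def gray_world(img):
    """Gray world: scale each channel so that its mean maps to mid-gray."""
    means = img.reshape(-1, 3).mean(axis=0)        # per-channel means
    gains = 0.5 / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)

def white_patch(img, percentile=99):
    """White patch: scale each channel so that its brightest values map to white."""
    maxima = np.percentile(img.reshape(-1, 3), percentile, axis=0)
    gains = 1.0 / np.maximum(maxima, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```

The hybrid method of [3] then blends such GW- and WP-style estimates progressively across shadows, mid-tones and highlights; that blending is not reproduced here.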
The ACE method, for Automatic Color Equalization, is an unsupervised enhancement algorithm for digital images. It is based on a new computational approach that merges the gray world and white patch equalization mechanisms while taking into account the spatial distribution of the color information. It is inspired by some adaptation mechanisms of the human visual system (HVS), in particular lightness constancy and color constancy. Lightness constancy makes us perceive a scene stably regardless of changes in mean luminance intensity, and color constancy makes us perceive a scene stably regardless of changes in the color of the illuminant.
ACE is able to adapt to widely varying lighting conditions, and to extract visual information from the environment efficaciously.
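As published [4][5][10], the first stage of ACE computes, for each channel and each pixel, a sum of slope-limited differences with the other pixels, weighted by the inverse of their spatial distance. The brute-force sketch below is a simplified reading of that stage (not the authors' code); the function name, the choice of saturation function and the subsampling shortcut are illustrative assumptions:

```python
import numpy as np

def ace_first_stage(img, slope=5.0, subsample=4):
    """Simplified ACE stage 1 (spatial chromatic adjustment).

    For every pixel p and channel c it accumulates
        R_c(p) = sum_j r(I_c(p) - I_c(j)) / d(p, j),
    where r is a slope-limited saturation function and d the Euclidean
    distance between pixels p and j.  `subsample` thins the comparison
    pixels j to keep this brute-force sketch tractable.
    """
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    js = np.stack([ys[::subsample, ::subsample].ravel(),
                   xs[::subsample, ::subsample].ravel()], axis=1)
    vals = img[::subsample, ::subsample].reshape(-1, 3)

    def r(diff):                                   # saturation function
        return np.clip(slope * diff, -1.0, 1.0)

    R = np.zeros_like(img, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            d = np.hypot(js[:, 0] - y, js[:, 1] - x)
            d[d == 0] = np.inf                     # skip the pixel itself
            R[y, x] = (r(img[y, x] - vals) / d[:, None]).sum(axis=0)
    return R
```

A brute-force evaluation is quadratic in the number of pixels, which is why the `subsample` shortcut is used above; it keeps the sketch usable on anything larger than a tiny image.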

2.1 Experimental results
In these experiments we test each color constancy method on each of the three kinds of images described above (from overview to close-up), thus varying the chromatic diversity of the images.
The image of fig. 3.a represents different fish species in an overall view. It has different casts (depending on the water depth and the illuminant), but especially a strong cyan-green cast, as shown by the hue histogram of fig. 3.b. The RGB histogram of the image (fig. 9) has a poor dynamic range and clipped values, hence the need to enhance the image.
The GW method gives an overall color-balanced image, since the original image is chromatically diverse (fig. 4.a). However, the resulting image is not fully chromatically diverse: there is a slight reddish cast (fig. 4.b), the background fish retain some cast, and the image has poor contrast and is oversaturated (fig. 10).
Since the original image is oversaturated (some values are clipped at the end of the histogram), the WP method changes the original image very little (figs. 6 and 12).
The GW/WP hybrid method gives a result similar to GW (figs. 5 and 11), since WP has no effect on the image.
The inner parameters of the ACE algorithm were tuned to meet the requirements of image and histogram shape naturalness and to deal with this kind of aquatic image. We chose 0.2 for the saturation parameter as a trade-off between good color constancy behavior and a solution that does not excessively amplify the original low-contrast background noise. We experimented with the algorithm with the “keep original gray” (fade to black) feature switched on and off. The “keep original gray” feature has been devised to relax the GW mechanism in the second stage: instead of centering the chromatic channels around medium gray, it preserves the original mean values, which results in histograms closer in shape to the original ones.
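Continuing the earlier sketch, the second stage can be read as a per-channel linear scaling of the stage-1 output to the display range; the snippet below illustrates the difference between centering around medium gray and the “keep original gray” option. This is an interpretation for illustration only, and the exact scaling used in ACE may differ:

```python
import numpy as np

def ace_second_stage(R, original, keep_original_gray=False):
    """Simplified ACE stage 2: linear scaling to the output range.

    By default each chromatic channel is centered around medium gray (0.5)
    and stretched to the full dynamic range; with keep_original_gray=True
    the original per-channel mean is preserved instead, which yields
    histograms closer in shape to the original ones.
    """
    out = np.empty_like(R, dtype=np.float64)
    for c in range(R.shape[2]):
        gain = 0.5 / max(np.abs(R[..., c]).max(), 1e-6)
        center = original[..., c].mean() if keep_original_gray else 0.5
        out[..., c] = center + gain * R[..., c]
    return np.clip(out, 0.0, 1.0)

# Usage of the two-stage sketch, e.g.:
#   R = ace_first_stage(img)
#   balanced = ace_second_stage(R, img, keep_original_gray=False)
```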
With the “keep original gray” feature enabled, the background cast still remains (fig. 7), due to the strong original color cast, but the RGB histogram of the image (fig. 13) has a good dynamic range and a natural shape, with no equalization artifacts.
The best result is obtained when the “keep original gray” feature is disabled: all the casts are removed from the image (fig. 8.a), the image is chromatically diverse (fig. 8.b) and the histogram shows a good dynamic range and a natural shape (see fig. 14).
The image of fig. 15.a represents several fish belonging to the same species. It has a strong green-cyan cast, as shown by the hue histogram in fig. 15.b.
The GW method estimates the most prominent cast in the image, since it is based on the mean of the image. As a consequence, it corrects the mid-tones and shadows, i.e. mainly the background of the image, as illustrated by fig. 16.a, while inverting the color cast in the foreground, hence the magenta cast visible there (fig. 16.b).
Since WP has no effect on the image, the GW/WP hybrid method gives results similar to GW, with less of the reverse magenta cast (fig. 17.a).
The ACE method (without the “keep original gray” feature) gives correct fish colors (fig. 18.a), good chromatic diversity (fig. 18.b) and much less reverse cast than the other methods.
The image of fig. 19.a represents a close-up of a single fish. It has poor chromatic diversity and is hence hard to correct. As for fig. 15.a, GW corrects the most prominent cast in the image and inverts the cast in the foreground (fig. 20). Since WP has nearly no effect on the image (fig. 22), the GW/WP hybrid method gives results similar to GW, with less of the reverse magenta cast (fig. 21.a). The ACE method (without the “keep original gray” feature) gives the best result, with correct fish colors (fig. 23.a), good chromatic diversity (fig. 23.b) and much less reverse cast than the other methods.

In conclusion, usual color constancy methods such as gray world and white patch are not effective on this kind of image, since they are global and can therefore only handle uniform color casts. The GW method estimates the most prominent cast in the image, since it is based on the mean of the image; as a consequence, it corrects the mid-tones and shadows while inverting the color cast in the foreground. The white patch method corrects the highlights, which is logical since it estimates the cast from the highlights; the shadows and mid-tones still suffer from a cast because the cast is non-uniform. Moreover, WP is very sensitive to noise and to clipping. The videos of the fish tanks can be oversaturated (and hence clipped), depending on the variable lighting conditions, so this method is not suitable for underwater video color correction.
ACE gives the best results, since it adapts well to different non-uniform color casts and is unsupervised.
3. IMPACT ON THE FISH SEGMENTATION STEP
Fig. 2: The segmentation step (original image, segmented image, labeled image)
The segmentation step [1-2] is decisive for the correct recognition of fish. It consists of a background subtraction: typically, a video sequence includes a static background and moving fish translating and deforming across consecutive frames. This phase breaks an image up into regions of interest representing the fish.
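For illustration, here is a toy background-subtraction sketch in the spirit of this step; it is not the segmentation method of [1-2], and the function name, the color-distance threshold and the minimum region size are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_fish(frame, background, threshold=0.08, min_area=50):
    """Toy background subtraction: keep pixels far from a static background model.

    frame and background are float RGB arrays in [0, 1]; threshold and
    min_area are illustrative values only.
    """
    diff = np.linalg.norm(frame - background, axis=2)   # per-pixel color distance
    mask = diff > threshold                             # foreground candidates
    labels, n = ndimage.label(mask)                     # connected regions
    sizes = np.bincount(labels.ravel())[1:]             # size of each region
    keep_ids = 1 + np.flatnonzero(sizes >= min_area)    # drop tiny false detections
    return np.isin(labels, keep_ids)                    # final foreground mask
```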
We compared the segmentation results on original videos and on videos corrected with the proposed method (ACE without the “keep original gray” feature). The fish are better localized in the corrected videos than in the original ones, as shown by figs. 24 and 25. Some very small false detections are noticed, but they can easily be discarded using a threshold.
The color cast removal enhances the fish segmentation step, since it emphasizes the color differences between the
fish and the background. Moreover, the color features extracted from the balanced images are closer to the real
colors of the fish [5].
The segmentation of an aquatic environment is a very difficult issue, due to the variability of the illumination and the resulting color cast. In the segmentation and color feature extraction steps, assumptions could be made about the constancy of the illuminant and of the color cast. These assumptions may hold as long as the recognition and the learning take place in the same fish tank and from the same position, but they no longer hold if the learning and the recognition are done on different fish tanks or from different positions. Hence the importance of a color balancing and color cast removal step, performed without any a priori information or assumption about the scene, on the live video sequences before the fish recognition step.
4. IMPACT ON THE FISH RECOGNITION STEP
Among the features extracted (during the second step) for the classification and recognition of fish are geometric features (e.g. area, perimeter, roundness ratio, elongation, orientation), color features (e.g. hue, gray levels, color histograms, chrominance values), texture features (e.g. entropy, correlation) and motion features. We also integrated some new color features, such as generalized color moments and color correlograms [8-9].
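As an illustration of one of these color features, the sketch below computes a simplified color auto-correlogram on a color-quantized image. It restricts each distance to the four axis-aligned neighbors rather than the full neighborhood ring of [9], so it is only an approximation for illustration; the function name and default distances are assumptions:

```python
import numpy as np

def color_autocorrelogram(labels, n_colors, distances=(1, 3, 5, 7)):
    """Simplified color auto-correlogram of a color-quantized image.

    labels is a 2-D array of color indices in [0, n_colors).  For each
    color c and distance k, it estimates the probability that a pixel at
    distance k from a pixel of color c has the same color c, using only
    the four axis-aligned neighbors at each distance.
    """
    h, w = labels.shape
    corr = np.zeros((n_colors, len(distances)))
    for ki, k in enumerate(distances):
        same = np.zeros(n_colors)
        total = np.zeros(n_colors)
        for dy, dx in ((k, 0), (-k, 0), (0, k), (0, -k)):
            src = labels[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            dst = labels[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            np.add.at(total, src.ravel(), 1)
            np.add.at(same, src[src == dst].ravel(), 1)
        corr[:, ki] = same / np.maximum(total, 1)
    return corr
```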
In order to eliminate useless or redundant features, a feature selection step based on an ambiguity measure is applied to select the most pertinent ones [7]. Feature reduction aims to make the classification process easier and to speed up the recognition step in order to achieve real-time processing. Finally, a quadratic Bayes classifier is used to assign each selected fish to one of the learned species.
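A quadratic Bayes classifier fits one Gaussian (mean and full covariance matrix) per species and assigns a sample to the class with the highest posterior probability. The sketch below shows the idea with scikit-learn's quadratic discriminant analysis on placeholder data; the feature dimensions, species count and random values are purely illustrative and not taken from this work:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Placeholder data: one row of selected features per segmented fish and one
# species label per row (dimensions and values are illustrative only).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))       # 12 selected features per fish
y_train = rng.integers(0, 6, size=200)     # 6 learned species

# Quadratic Bayes classifier: one Gaussian (mean + full covariance) per class.
qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
X_new = rng.normal(size=(5, 12))           # features of newly segmented fish
predicted_species = qda.predict(X_new)
```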

References
- Image indexing using color correlograms (conference paper).
- A new algorithm for unsupervised global and local color correction (journal article).
- From Retinex to Automatic Color Equalization: issues in developing a new algorithm for unsupervised color equalization (journal article).
- Color-Based Moment Invariants for Viewpoint and Illumination Independent Recognition of Planar Color Patterns (book chapter).
- Perceptual approach for unsupervised digital color restoration of cinematographic archives (conference paper).