
Showing papers presented at "Color Imaging Conference in 2011"


Journal ArticleDOI
31 May 2011
TL;DR: In this article, the authors present an overview of different successful applications of density functional theory to investigate the structure, dynamics, vibrational spectra, NMR chemical shifts, hyperfine interactions, excited states, and magnetic properties of lanthanide(III) complexes.
Abstract: Density functional theory (DFT) has become a general tool to investigate the structure and properties of complicated inorganic molecules, such as lanthanide(III) coordination compounds, due to the high accuracy that can be achieved at relatively low computational cost. Herein, we present an overview of different successful applications of DFT to investigate the structure, dynamics, vibrational spectra, NMR chemical shifts, hyperfine interactions, excited states, and magnetic properties of lanthanide(III) complexes. We devote particular attention to our own work on the conformational analysis of LnIII-polyaminocarboxylate complexes. In addition, a short discussion of the different approaches used to investigate lanthanide(III) complexes, i.e. all-electron relativistic calculations and the use of relativistic effective core potentials (RECPs), is also presented. The issue of whether or not the 4f electrons of the lanthanides are involved in chemical bonding is also briefly discussed.

55 citations


Journal ArticleDOI
31 May 2011

32 citations


Proceedings Article
01 Jan 2011
TL;DR: This work presents a framework for constructing and evaluating the performance for a color version of the QR code, the most commonly deployed monochrome barcode in mobile applications, and demonstrates that the framework is effective and allows recovery of embedded data with low bit error rates.
Abstract: Monochrome 2-D barcodes have recently become extremely popular in mobile imaging applications. To accommodate higher data rates, we present a methodology for extending these barcodes to color. We use independent per channel data encoding in cyan, magenta, and yellow print colorant channels and decode the information in the complementary red, green, and blue channels used in capture, effectively increasing the data rate by a factor of three. To overcome unavoidable inter-channel interference, our framework utilizes adaptive thresholding and model-based interference cancellation at the decoder, which are facilitated by an intelligent barcode design that allows estimation of the parameters required for thresholding and cancellation. We present the framework by constructing and evaluating the performance of a color version of the QR code, the most commonly deployed monochrome barcode in mobile applications. Experimental results demonstrate that the framework is effective and allows recovery of embedded data with low bit error rates, which can be readily handled by the built-in error correction within the QR code standard.
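The per-channel idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the print/capture step is idealized with no inter-channel interference, and a fixed global threshold stands in for the adaptive thresholding and model-based cancellation described above.

```python
def encode_cmy(bits_c, bits_m, bits_y):
    """Map three independent bit planes to CMY colorant values (bit 1 -> full colorant)."""
    return [(255 * c, 255 * m, 255 * y)
            for c, m, y in zip(bits_c, bits_m, bits_y)]

def capture_rgb(cmy_modules):
    """Idealized print/capture: each captured RGB channel is the complement
    of the corresponding CMY colorant (no inter-channel interference)."""
    return [(255 - c, 255 - m, 255 - y) for c, m, y in cmy_modules]

def decode_rgb(rgb_modules, threshold=128):
    """Threshold each captured channel independently: a low red response
    means high cyan coverage, i.e. bit 1 in the cyan plane, and so on."""
    bits_c = [1 if r < threshold else 0 for r, _, _ in rgb_modules]
    bits_m = [1 if g < threshold else 0 for _, g, _ in rgb_modules]
    bits_y = [1 if b < threshold else 0 for _, _, b in rgb_modules]
    return bits_c, bits_m, bits_y

bc, bm, by = [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]
recovered = decode_rgb(capture_rgb(encode_cmy(bc, bm, by)))
assert recovered == (bc, bm, by)  # three bit planes recovered without error in the ideal case
```

In the real system the capture step is not this clean: ink spectra overlap, so each captured channel mixes contributions from all three colorants, which is exactly what the paper's interference cancellation addresses.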

23 citations


Journal ArticleDOI
31 May 2011
TL;DR: The particular photo-physical specifications of this family of luminescent labels will be described, followed by the chemical and physico-chemical requirements necessary to reach high luminescence and good labelling activity.
Abstract: This chapter describes the properties of luminescent lanthanide complexes that can be used in biological solutions for luminescence based labelling applications. The particular photo-physical specifications of this family of luminescent labels will be described, followed by the chemical and physico-chemical requirements necessary to reach high luminescence and good labelling activity. Following these criteria, examples of the literature will be detailed, following a classification based on the chemically activated function used in the labelling process.

21 citations


Journal ArticleDOI
31 May 2011
TL;DR: The fundamental concepts explaining the particularly good affinity between cyclen derivatives and lanthanides and the way to sense the interaction with anionic substrates are discussed.
Abstract: This review highlights the research carried out during the last decade or so in the field of anion coordination using cyclen-based lanthanide complexes as luminescent receptors (cyclen = 1,4,7,10-tetraazacyclododecane). Herein, the fundamental concepts explaining the particularly good affinity between cyclen derivatives and lanthanides and the way to sense the interaction with anionic substrates are firstly discussed. This is followed by a selection of examples from the recent literature describing these cationic receptors as powerful optical sensors of environmental, biological and pharmaceutical relevance.

21 citations


Journal ArticleDOI
31 May 2011

18 citations



Proceedings Article
01 Jan 2011

18 citations





Proceedings Article
01 Jan 2011
TL;DR: These gamuts show that a significant part of the classical ink gamut can be reproduced by combining classical inks with daylight fluorescent inks, and can hide security patterns within printed images.
Abstract: We propose a method for hiding patterns within printed images by making use of classical and of two daylight fluorescent magenta and yellow inks. Under the D65 illuminant we establish in the CIELAB space the gamut of a classical cmyk printer and the gamut of the same printer using a combination of classical inks with daylight fluorescent inks. These gamuts show that a significant part of the classical ink gamut can be reproduced by combining classical inks with daylight fluorescent inks. By printing parts of images with a combination of classical and daylight fluorescent inks instead of using classical inks only, we can hide security patterns within printed images. Under normal daylight, we do not see any difference between the parts printed with classical inks only and the parts printed with daylight fluorescent inks and classical inks. By changing the illumination, e.g. by viewing the printed image under a tungsten lamp or under a UV lamp, the daylight fluorescent inks change their colors and reveal the security pattern formed by combinations of classical inks and of daylight fluorescent inks.
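The hiding principle rests on a metameric match: two ink combinations that produce the same color under one illuminant but different colors under another. The toy sketch below uses made-up 3-band reflectances and illuminants, not data from the paper, and real fluorescent inks re-emit absorbed light and cannot be modeled by reflectance alone.

```python
def integrate(reflectance, illuminant):
    """Single-number stand-in for a sensor response under an illuminant."""
    return sum(r * e for r, e in zip(reflectance, illuminant))

d65_like = [1.0, 1.0, 1.0]        # flat "daylight" illuminant
tungsten_like = [0.3, 0.8, 1.9]   # red-heavy "tungsten" illuminant

classical = [0.2, 0.5, 0.3]       # effective response of a classical-ink patch
fluorescent_mix = [0.4, 0.1, 0.5] # classical + daylight fluorescent combination

# the two patches match under daylight (the pattern is hidden) ...
assert abs(integrate(classical, d65_like)
           - integrate(fluorescent_mix, d65_like)) < 1e-9
# ... but differ under tungsten, revealing the pattern
assert abs(integrate(classical, tungsten_like)
           - integrate(fluorescent_mix, tungsten_like)) > 0.05
```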



Proceedings Article
01 Jan 2011
TL;DR: TangiPaint is a digital painting application that provides the experience of working with real materials such as canvas and oil paint and represents a first step toward developing digital art media that look and behave like real materials.
Abstract: TangiPaint is a digital painting application that provides the experience of working with real materials such as canvas and oil paint. Using fingers on the touchscreen of an iPad or iPhone, users can lay down strokes of thick, three-dimensional paint on a simulated canvas. Then using the Tangible Display technology introduced by Darling and Ferwerda [1], users can tilt the display screen to see the gloss and relief or "impasto" of the simulated surface, and modify it until they get the appearance they desire. Scene lighting can also be controlled through direct gesture-based interaction. A variety of "paints" with different color and gloss properties and substrates with different textures are available and new ones can be created or imported. The tangiPaint system represents a first step toward developing digital art media that look and behave like real materials.

Proceedings Article
01 Jan 2011
TL;DR: The aim is to construct an algorithm that could be both a tone reproduction model as well as a color appearance model, and to achieve dynamic range reduction while taking human color vision into account.
Abstract: Color appearance models and tone reproduction algorithms are currently solving different problems. These classes of algorithms are also developed and used in different communities. However, they show remarkable functional similarities. Perhaps there is reason to think that they could in fact be one and the same thing. The advantages would be that we could achieve dynamic range reduction while taking human color vision into account. Vice-versa, we could predict the appearance of color over a large range of intensities. But how to overcome the differences, and how to construct an algorithm that could be both a tone reproduction model as well as a color appearance model?

Introduction
An imaging pipeline consists of processes to capture, store, transmit and display images and video. Traditional imaging pipelines are designed around the abilities of conventional capture and display devices, and therefore do not need dynamic range beyond what can be represented with a single byte per color channel. This situation is changing as image capture and in particular display technologies are maturing to include higher dynamic ranges [1]. High dynamic range imaging technologies produce and manipulate pixel data that conceptually consist of floating point numbers instead of 8-bit integer formats [2]. The benefit is clear: capturing data at full fidelity will lead to better imagery, even if the display device is not capable of reproducing the full dynamic range. An example is shown in Figure 1, where a single 8-bit exposure of a scene is compared with a high dynamic range (HDR) capture of the same scene. The resulting high dynamic range image was tonemapped to fit the reproduction range of paper. Note that the exposure on the left has both under- and over-exposed areas. This is not uncommon and therefore a good example of the utility of high dynamic range imaging technologies.
While representing pixels as floating point numbers rather than bytes may seem a minor change, there are many perceptual as well as technological aspects that require a reassessment. On the technological side, there are still many challenges. Perhaps the main one is that HDR image and video capture devices generate an enormous amount of data that would have to be managed. Standard compression algorithms are not directly amenable to HDR data [4, 5, 6, 7], with the implication that broadcast standards have yet to emerge. Second, HDR movie cameras are only just becoming available, including the Red Epic (http://www.red.com/) and the camera by Contrast Optical Engineering [8]. Third, it is not entirely clear how much dynamic range should be captured. While the range of illumination between starlight and bright sunlight over which the human visual system can adapt is around 10 orders of magnitude [9], it seems overkill to try and capture this full range at all times. The human visual system is able to simultaneously perceive around 4 orders of magnitude of illumination under a specific laboratory set-up [10], although in practice this number may be a bit higher. It would probably be good practice to design imaging pipelines around this number. If it is assumed that HDR imagery and video will be captured with such a dynamic range, then displays should match this capability as well. Currently, only very few displays come even close, the Dolby prototype displays [1] and their commercial derivatives by SIM2 (http://www.sim2.com/) being the exception. Print technology is inherently incapable of reaching such dynamic range due to its reflective nature.

Figure 1. This scene was captured with a single exposure (left) and with high dynamic range imaging technologies (right). The image on the right was tonemapped for display/print using the photographic tone reproduction operator [3]. Photograph courtesy of Tania Pouli.
Nonetheless, it may be foreseen that display devices will soon exhibit a greater variety in dynamic range than currently available. Whether low dynamic range legacy content or high dynamic range data is sent to a display, it will need to be mapped into a format that can be handled by that given display. In particular, it will need to be tonemapped to fit the dynamic range of the display device, and should take into consideration the state of adaptation of the observers. In recent years, much progress has been achieved in the design of algorithms that map high dynamic range images to low dynamic range display devices [2, 6]. Moreover, these algorithms have been subjected to psychophysical evaluation such as preference ratings [11, 12, 13] and similarity ratings [14, 13, 15, 16]. Although several tone reproduction operators are capable of compressing dynamic range, in this paper we argue that one weakness that persists is the lack of sensible color management. In particular, it is well known that there exist luminance-induced appearance phenomena such as the Hunt and Stevens effects, the Helmholtz-Kohlrausch effect and the Bezold-Brücke hue shift [17, 18], which indicate that there is a complex relationship between the perception of color and the luminance level at which colors are perceived. Currently, these effects are not generally taken into consideration in tone reproduction operators, leading to images that generally look either too vivid or too dull, and are certainly unsuitable for accurate color reproduction. On the other hand, color appearance modelling is an active area of research that has led to several models that predict the perception of color under different illumination conditions [17].
With the tristimulus values of a patch of color given, as well as a description of the environment in which it is observed, such models predict the perception of color in terms of appearance correlates, which include lightness, brightness, hue, saturation, colorfulness and chroma [19, 17, 18]. Few color appearance models are designed with high dynamic range imaging in mind, although notable exceptions exist [20, 21, 22, 23]. In particular, the models proposed by Kim et al. [22] are based on a psychophysical dataset that spans a much higher dynamic range than the psychophysical dataset that lies at the heart of most color appearance models [24]. The purpose of this paper is to argue that although tone reproduction and color appearance modelling may be addressing different problems, their aims partially overlap. Moreover, their functional similarity is unmistakable, albeit with significant differences. This is especially the case for tone reproduction operators that model aspects of human vision. This paper catalogs the similarities and differences in order to show where the opportunities lie to construct a combined tone reproduction and color appearance model that could serve as the basis for predictive color management under a wide range of illumination conditions. Such an algorithm would benefit both the field of high dynamic range imaging and that of color imaging. To this end, the remainder of the paper begins by briefly describing the aforementioned luminance-induced appearance phenomena. Then, the structure of tone reproduction operators is outlined, insofar as they are based on neurophysiology. These models are functionally closest to color appearance models, which are discussed next. A discussion of attempts to bring tone reproduction and color appearance modelling closer together then precedes the conclusions.

Luminance Induced Appearance Phenomena
The overall amount of light under which colors are observed may change the appearance of these colors.
For instance, on a bright sunny day colors tend to appear more colorful than on an overcast day [18]. Several different observations have been made that relate to the relationship between illumination and color appearance. First, the Hunt effect states that as the luminance of a given color increases, so does its perceived colorfulness [25]. Further, perceived brightness contrast also changes with luminance, which is known as the Stevens effect [26]. Brightness itself is not only a function of luminance, but also depends on the saturation of the stimulus. This is described by the Helmholtz-Kohlrausch effect, although this effect depends on hue angle as well [27]. Finally, the perception of the hue of monochromatic light sources depends on luminance level, which is described by the Bezold-Brücke hue shift.

Figure 2. The image on the left was tonemapped with the photographic operator [3], which compresses the luminance channel of the Yxy color space. It therefore does not take luminance-induced appearance phenomena into account. The image on the right was tonemapped using the color appearance model by Kim et al. [22].
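As a concrete illustration of a luminance-only operator, here is a minimal sketch of the global form of the photographic tone reproduction operator [3]: luminance is scaled by a "key" relative to the log-average scene luminance and then sigmoidally compressed. The sample values are illustrative, not from the paper, and no chromatic processing is performed, which is exactly the limitation discussed above.

```python
import math

def photographic_tonemap(luminances, key=0.18, delta=1e-6):
    """Global photographic operator: scale by the key relative to the
    log-average luminance, then compress with L/(1+L)."""
    log_avg = math.exp(sum(math.log(delta + L) for L in luminances)
                       / len(luminances))
    scaled = [key * L / log_avg for L in luminances]
    # sigmoidal compression maps [0, inf) into [0, 1)
    return [Ls / (1.0 + Ls) for Ls in scaled]

hdr = [0.01, 0.1, 1.0, 10.0, 1000.0]   # spans ~5 orders of magnitude
ldr = photographic_tonemap(hdr)
assert all(0.0 <= v < 1.0 for v in ldr)
assert ldr == sorted(ldr)  # monotone: the ordering of luminances is preserved
```

Because only the luminance channel is compressed, the Hunt, Stevens, Helmholtz-Kohlrausch and Bezold-Brücke effects are left unmodelled, which is the gap a combined model would close.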

Proceedings Article
01 Jan 2011
TL;DR: A local, image dependent approach to the gamut mapping problem is studied, which has a good potential to obtain mapped images with higher perceived quality than any of the individual algorithms, on which the method is based.
Abstract: In this paper we study a local, image-dependent approach to the gamut mapping problem. A structural image quality measure is used to pick an optimal mapping algorithm for image patches from a given class of algorithms. The optimally mapped patches are then fused into a single mapping of the image. We discuss and compare two image fusion methods that are designed to avoid artifacts in the fused image. Psycho-visual experiments confirm that this approach has good potential to obtain mapped images with higher perceived quality than any of the individual algorithms on which the method is based.
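The select-per-patch idea can be sketched as follows. This is a hedged toy, not the paper's method: 1-D "patches" of lightness values stand in for image patches, a negative absolute error stands in for the structural quality measure, and patches are simply concatenated rather than fused with the artifact-avoiding fusion the paper evaluates.

```python
def clip(v, lo=0.0, hi=1.0):
    """Candidate 1: hard-clip out-of-gamut values to the gamut boundary."""
    return min(max(v, lo), hi)

def compress(v, scale=0.75):
    """Candidate 2: global linear compression toward the gamut."""
    return v * scale

def quality(mapped, original):
    """Stand-in for a structural quality measure: negative absolute error."""
    return -sum(abs(m - o) for m, o in zip(mapped, original))

def map_patchwise(patches, algorithms):
    """Pick, per patch, the candidate mapping with the best quality score."""
    fused = []
    for patch in patches:
        candidates = [[alg(v) for v in patch] for alg in algorithms]
        best = max(candidates, key=lambda c: quality(c, patch))
        fused.extend(best)
    return fused

patches = [[0.2, 0.4], [0.9, 1.3]]   # second patch exceeds the gamut
out = map_patchwise(patches, [clip, compress])
assert out[:2] == [0.2, 0.4]          # in-gamut patch: clipping is lossless
assert all(v <= 1.0 for v in out)
```

The key property illustrated is that no single algorithm wins everywhere, so a local choice can dominate every global one; the hard part the paper addresses is fusing the locally chosen patches without visible seams.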



Proceedings Article
01 Jan 2011
TL;DR: This study aims to explore blackness preference and blackness perception for blacks of different hue and for observers from different cultures, and to develop ideas towards a blackness index.
Abstract: Despite the importance of black there have been relatively few studies into blackness perception; by contrast, a great number of equations have been published over the last 100 years that attempt to predict the perception of whiteness [1,2]. In order to understand the different varieties of black, the assessment of blackness is discussed in this work. In the psychophysical experiment, comparative studies on color perception (which of two black samples observers considered to be closest to a pure black) and color preference (which of two black samples observers preferred) were carried out. All color samples were evaluated by hue, and analysis was carried out based on gender and nationality (Chinese and UK). No effect of culture was found for blackness perception; however, for blackness preference some significant cultural effects were observed.

Introduction
Many black inks are made by mixing colored dyes or pigments and can have an evident hue. Therefore, there are slight hue differences between blacks although they look rather similar. It is interesting to consider, for a range of blacks of varying hues, which black would be preferred or which black would be considered to be a 'better' black; this latter subjective quality may be important, for example, for the design of black inks for use in inkjet printers. A rather large literature exists that addresses similar questions for whites [2-5], which have been studied for 100 years. MacAdam assessed whiteness both visually and instrumentally [1] and there are numerous standards for the instrumental assessment of whiteness [6]. In comparison, the assessment of blackness has received relatively little attention. The aim of this work is to partly address this and to develop ideas towards a blackness index. Black is a color, albeit one that in its purest form lacks chroma.
Color results from human perception of the objective world, so that various physical phenomena, physiological mechanisms and psychological effects combine to affect our perception of color. Color perception is generally regarded as having three dimensions: hue, value and chroma. Some work has been carried out to determine observers' preferences for color, especially colors of different hue. As early as 1959, Guildford and Smith asked 20 observers to judge 316 differently colored samples from the Munsell Book of Color [7]. Guildford and Smith found that observers liked blues and greens and disliked yellows. They also found that observers preferred more saturated colors. The notion that people like blue and dislike yellow has been confirmed by much research over the last 50 years. Most recently, a study asked 48 observers to rate 32 colors from the Berkeley Color Project and confirmed that blues and greens were preferred and yellows were disliked [8]. Furthermore, in the 1990s, Saito carried out a cross-cultural study of the color preferences of Koreans, Japanese and Taiwanese observers, which revealed no significant cultural differences [9]. This study aims to explore blackness preference and blackness perception for blacks of different hue and for observers from different cultures. Blackness preference relates to whether observers prefer one black rather than another; blackness perception relates to whether observers consider one black sample to be more black than another. Whether these two terms are distinct will also be explored in this work.

Journal ArticleDOI
30 Nov 2011
TL;DR: In this article, the first monometallic dioxomolybdenum(VI) complex containing a chiral N,N',O-tridentate ligand is reported.
Abstract: Novel cis-dioxomolybdenum(VI) complexes containing chiral ligands have been prepared and fully characterized, including structural determinations by X-ray diffraction. The first monometallic dioxomolybdenum(VI) complex containing a chiral N,N',O-tridentate ligand is reported. The new complexes were evaluated as catalysts for the epoxidation of olefins using tert-butyl hydroperoxide and H2O2 as oxidants. They were found to be efficient catalysts affording good chemoselectivity but low enantioselectivity.

Proceedings Article
01 Jan 2011
TL;DR: An experiment is conducted in which it is shown that negative polarity produces better performance in a legibility task than does positive polarity (dark text on a bright background).
Abstract: Most displays viewed in dark environments can easily cause dazzling glare and affect a viewer’s dark adaptation state (night vision). In previous work we showed that legibility could be improved and dark adaptation preserved in low-light environments by using a display design with a specially selected spectral light emission. We used long-wavelength light (red) that is easily visible to daylight vision photoreceptors (cones) but almost invisible to night vision photoreceptors (rods). In this paper we conduct an experiment in which we show that negative polarity (bright text on a dark background) produces better performance in a legibility task than does positive polarity (dark text on a bright background). Our results can serve as guidelines for designing displays that change their color scheme at low ambient light levels.

Proceedings Article
01 Jan 2011
TL;DR: This system is built in an effort to assess the feasibility of simple retrofit strategies for abridged multispectral display from native P3 and sRGB-optimized devices, and proposes and executes a full spectral reconstruction model for reproducing target color patches under specified illumination.
Abstract: Multispectral display technology employing more than 3 primaries and utilizing spectral color reproduction image processing rather than traditional trichromatic models is key to expanding color gamut, rendering fully accurate color reproduction and minimizing observer metamerism. In the presented work, two LCD HDTV projectors are modified by optical filtration to generate 6 unique and controllable primary spectra. A full spectral reconstruction model is then proposed and executed for reproducing target color patches under specified illumination. This system is built in an effort to assess the feasibility of simple retrofit strategies for abridged multispectral display from native P3 and sRGB-optimized devices. Due to narrow spectral signatures in each of the LCD-modulated RGB primaries, spectral reconstruction and observer metamerism improvements over a simple 3-primary system are negligible. Significant improvements, however, are simulated by optimization of ideal primary spectra for specific target sets, providing a basis for future system refinement. Also concerning in the constructed system are inherent spatial non-uniformities, scene-dependent flare characteristics and long-term colorimetric drift that pose several engineering challenges for a fully functional system.

Introduction
Traditional image display paradigms for both still and motion picture applications are rooted in a 3-primary metameric match model relying exclusively on Grassmann’s laws of additivity and the fundamental quantal catch treatment of the human visual system. Color-matching functions are employed to spectrally integrate visual stimuli, simplifying the higher order complexity of real radiometric distributions from scene colors and enabling accurate reproduction via finite scaled outputs in just a small number of primary channels. Problems in this model, though, may be encountered with restrictions to color gamut and spectral accuracy and with limitations from observer metamerism.
In the former, fully characterized scene content may constitute a reproduction stimulus outside the capabilities of the traditional limited-primary display device. In the latter, controlled metameric matches of color within the display for a single observer may prove not to be matches for another observer with slightly different color-matching functions. The solution to these problems lies, in part, in generating a full spectrum-based reproduction environment. In the ideal case, narrow bandpass, high spectral resolution primary sets would be conceived to accomplish the goals of controllable spectral reproduction of target stimuli. By combining near monochromatic primaries at a high sample rate across the visible electromagnetic spectrum, many sufficiently complex stimuli could be rigorously rendered. In a practical sense, however, an abridged spectral reproduction model makes more sense in both hardware design and image processing complexity, employing superimposed images from two or more traditional 3-primary projection devices whose individual primary spectra are purposefully optimized. For this work, two LCD digital projectors are used to prove the feasibility of constructing an abridged spectral reproduction display environment from P3 digital cinema-based displays. Native primary spectra from each device are modified by way of optical filtration to generate as many as 6 unique and controllable projection primaries. By careful characterization of the projectors and optimization of primary drive amounts, spectral reconstruction of simple color patch targets is achievable with the proposed system.

Background and Theory
Traditionally, additive electronic displays are well represented by a gain-offset-gamma (GOG) or gain-offset-gamma-offset (GOGO) model as summarized by Day et al., relating the device drive value in each channel (analog voltage or digital drive value, for example) to a radiometric scalar of the maximum channel output spectrum [1].
Via primary rotation to CIE tristimulus amounts, these scalars can further predict reproduced colorimetry in a metameric reproduction model. Owing to natural variations in ocular media transmission, photoreceptor spectral sensitivities and post-retinal mechanisms, any population of human observers will comprise a disparate set of color matching functions. Further, even single observers experience an alteration of their color matching functions with age and field of view [2]. As such, a metameric reproduction for the 1931 2° standard observer does not guarantee a similar match for any real observer [3]. For emissive displays, the only sure way to avoid all observer metamerism failure is to produce a multiprimary spectral reconstruction of the target object stimuli [4,5]. Much of the historical work progressing multiprimary display development has been promoted in the context of general gamut expansion beyond traditional 3-primary limitations with ancillary benefit to the observer metamerism problem [6,7,8]. However, Hill has specifically shown how multispectral display signal mapping may be algorithmically optimized to limit observer metamerism when there are limitations on fully accurate spectral reconstruction [9]. A rigorous multispectral reproduction system would require a narrow band primary for each level of granularity within the desired visible spectrum. This type of system is largely impractical for typical image capture, processing and reproduction workflows and so an alternative abridged spectral reproduction system will be investigated instead. Analogous abridged multispectral reproduction systems have proven successful in generating reasonable spectrum reconstruction in the fields of digital image capture and multi-ink inkjet printing [10,11,12]. In these applications a co-optimization of spectral accuracy and reduced illuminant and/or observer metamerism performance is often employed. 
Abridged filter-based approaches have also been used extensively in low-end spectrometers and colorimeters. Yamaguchi, et al. have demonstrated an end-to-end multispectral capture and display system employing a 16-channel digital camera and 6-channel projection display, complete with models for data management and transmission in an ICC-analogous workflow [13]. Several attempts have also been made to adapt the techniques to real-time video workflows for motion imaging applications [14]. The current work serves to explore primary spectra optimization for a 6-band display system employing available consumer LCD HDTV projectors having native primary spectra consistent with a P3 or sRGB gamut. Two projectors will be characterized and their primary spectra modified by the addition of ancillary color filters. With the proper filters, the spectral peaks of the projectors should prove separable enough to yield 6 independent color channels, appropriate for generating spectral matches to reasonably well-behaved aim spectra. Once the projectors are appropriately characterized, a basic spectral reconstruction model can be built for the 6-channel system via equation 1 (which includes baseline black signatures for each device as well). Taking advantage of presumed primary stability in a well-behaved additive system, equation 1 can be further expanded to equation 2 where the characteristic primary spectra, SPD(λ)i_max, are the absolute radiometric measures of the maximally driven primary in each projector and for each channel. Relative radiometric primary amounts in the full summation are generalized by the scaling constants, k (1x6 vector for the proposed system), which are analogous quantities to RGB radiometric scalars in the Day et al. model but defined more generically for multi-channel systems with more than 3 controllable primaries. 
SPD(λ)_mix = SPD(λ)_{r,A} + SPD(λ)_{r,B} + SPD(λ)_{g,A} + SPD(λ)_{g,B} + SPD(λ)_{b,A} + SPD(λ)_{b,B} + SPD(λ)_{k,A} + SPD(λ)_{k,B}    (1)

SPD(λ)_mix = [ k  1  1 ] [ SPD(λ)_{r_max,A}  SPD(λ)_{r_max,B}  SPD(λ)_{g_max,A}  SPD(λ)_{g_max,B}  SPD(λ)_{b_max,A}  SPD(λ)_{b_max,B}  SPD(λ)_{k,A}  SPD(λ)_{k,B} ]^T    (2)

where the unit weights apply to the two baseline black signatures. Typically, aim spectra will be presented as an objective goal for the multiprimary display system and, as such, an optimization approach can be used to determine the theoretical scalars, k, needed to reproduce any target (recognizing that there are limitations on the amplitude of each term within k). Unlike typical reflectance-space spectral reconstruction modeling performed by Wyble et al. on inkjet systems [11], emissive spectral reproduction demands consideration of absolute radiometric output, especially when accounting for the superposition of the two distinct projector optical paths. A relative shift in the absolute white luminance of one projector versus the other can lead to degraded spectral output quality through the full model. Further, a spectral aim set that demands more flux than the total system is capable of in any single channel likewise limits the optimized performance. The k scalars from equation 2 may be derived for any aim spectra set utilizing appropriate constrained nonlinear optimization. For best results, a spectral/colorimetric co-optimization is desirable. The spectral reconstruction system proposed in this work offers 6 distinct primary spectra and is thus capable of infinite combinations of output for achieving standard colorimetric matches to the aim spectra. Several potential techniques are available for this task, including 2-stage co-optimization wherein an initial spectral optimization provides k inputs to a colorimetric refinement, or matrix-switching approaches focused on optimizing colorimetric processing efficiency for real-time video sequences at the expense of spectral accuracy [15].
Further, full Lagrange multiplier-based spectral/colorimetric co-optimizations that potentially bypass the computational overhead of nonlinear optimization have also been proposed in previous work [16].

Experimental

To generate 6 superimposed channels of color for spectral reconstruction, twin Panasonic

Proceedings Article
01 Jan 2011
TL;DR: A useful representation of the gamut of an additive display that facilitates efficient numerical computation of the gamut volume is developed, and several alternative numerical schemes for gamut volume computations in perceptual spaces are evaluated.
Abstract: Gamut volume computations in perceptual spaces are useful for optimizing designs of color displays. We develop a useful representation of the gamut of an additive display that facilitates efficient numerical computation of the gamut volume. For three primary systems, our representation coincides with the obvious representation of a three-primary additive gamut, while for multiprimary systems, the representation we develop provides a partition of the device gamut as a disjoint union of displaced three primary gamuts thereby facilitating a computation of the overall gamut volume as the sum of these individual three primary gamut volumes. Based on our representation, we develop and evaluate several alternative numerical schemes for gamut volume computations in perceptual spaces, comparing their accuracy and computational requirements.
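In a linear additive space, the partition described in the abstract has a convenient closed form: the gamut of an N-primary additive display is a zonotope (the Minkowski sum of the primary vectors), which decomposes into displaced three-primary parallelepipeds whose volumes are |det| terms. The sketch below illustrates that decomposition with hypothetical primaries; note the closed form holds only in a linear space, whereas volumes in perceptual spaces require the numerical schemes the paper evaluates.

```python
# Sketch: in a *linear* additive space, the volume of an N-primary display
# gamut is the sum of |det| over all 3-subsets of primary vectors (zonotope
# volume). The primary vectors below are hypothetical.
import numpy as np
from itertools import combinations

def additive_gamut_volume(primaries):
    """primaries: (N, 3) array, one tristimulus vector per primary."""
    return sum(abs(np.linalg.det(primaries[list(idx)]))
               for idx in combinations(range(len(primaries)), 3))

rgb = np.eye(3)                        # ideal three-primary display
print(additive_gamut_volume(rgb))      # prints 1.0, the unit cube

# Adding a fourth primary enlarges the zonotope volume.
rgby = np.vstack([rgb, [0.5, 0.5, 0.0]])
```

For three-primary systems the sum has a single term, recovering the obvious parallelepiped volume; for N > 3 each term corresponds to one displaced three-primary gamut in the disjoint-union representation.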

Proceedings Article
01 Dec 2011
TL;DR: This paper addresses explicitly the problem of color image super-resolution by formulating an optimization problem that leads to convergence guarantees and shows results demonstrating substantial image quality improvement over the state of the art, especially for images with significant chrominance geometry.
Abstract: Image super-resolution is the problem of recovering a high resolution (hi-res) image from multiple low resolution (lo-res) acquisitions of a scene. The main focus and the most significant contributions of research in this area have been on the problem of super-resolving single channel (grayscale) images. Multi-channel (color) image super-resolution is often treated as an extension to grayscale super-resolution by simply considering the luminance component of the image more carefully than the chrominance components. In this paper we address explicitly the problem of color image super-resolution by formulating an optimization problem that leads to convergence guarantees. The key contribution of this work is the inclusion of a color regularizer that effectively accounts for both luminance and chrominance geometry in images. We show results demonstrating substantial image quality improvement over the state of the art, especially for images with significant chrominance geometry.

Introduction

The resolution of an imager is limited by the resolution of its image sensor and the quality of its optics. In several imaging applications it is useful to recover an image with resolution higher than that permitted by the capabilities of the imager. Image super-resolution fills this need by recovering a hi-res image from multiple lo-res acquisitions of a scene, provided, of course, that the different lo-res images capture different (at the sub-pixel level) views of the scene. Super-resolution finds use in several imaging applications; for instance, medical imaging applications that use images for computer vision tasks benefit from hi-res images. Another application where super-resolution is particularly appropriate is surveillance, where a continuously acquired video stream can provide the input lo-res frames to the super-resolution algorithm.
In simple terms, the super-resolution problem is addressed by first describing the several lo-res images on a grid finer than the resolution of the single images (an image registration problem), followed by filling in values for missing pixels (an image interpolation problem). There have been significant advances in super-resolution research in recent years. Park et al. [1] give an overview of the problem and describe early advances. Most performance improvements come from solutions to the image registration problem with better motion estimation techniques. A common thread in most work is the focus on grayscale image super-resolution. Color image super-resolution is often treated simply by assuming that the luminance component of the image carries its spatial features. Algorithms that consider the chrominance components use them only to improve image registration through better motion estimation [2, 3]. Very few researchers consider explicitly the relationship between the color channels in the interpolation problem. When they do, a common approach is to assume that spatial high-frequency components across the color channels are strongly correlated. In other words, if an edge (or a feature) is sensed in one channel, it is assumed to exist in all channels. Farsiu et al. [4] use this approach in a joint demosaicking and super-resolution problem formulation with good results. We note that the assumption of strong interchannel correlation in high-frequency components is akin to assuming that most spatial features (edges and texture) appear in some luminance-type component, found either by decomposition to a standard luminance-chrominance space like YCbCr or by PCA decorrelation of the color components. This assumption is clearly untrue for images with strong chrominance geometry: images in which edges and textures are not a result of ambient illumination but arise from edges between objects with different chrominance.
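The lo-res acquisition process underlying this registration-then-interpolation view can be simulated directly with a warp-blur-downsample chain. The following toy sketch assumes an integer-shift warp, a box-blur PSF, and factor-2 decimation; real SR pipelines use subpixel warps and measured blur kernels, and none of these specific choices come from the papers cited above.

```python
# Sketch of a warp -> blur -> downsample -> noise observation chain that
# produces K lo-res color images from one hi-res image. All operators and
# sizes are illustrative.
import numpy as np

def warp(x, dy, dx):
    """Integer translation (a real SR pipeline uses subpixel warps)."""
    return np.roll(np.roll(x, dy, axis=0), dx, axis=1)

def blur(x, size=3):
    """Separable box blur standing in for the camera PSF."""
    kernel = np.ones(size) / size
    for axis in (0, 1):
        x = np.apply_along_axis(np.convolve, axis, x, kernel, mode="same")
    return x

def downsample(x, factor=2):
    """Decimation by the given factor."""
    return x[::factor, ::factor]

rng = np.random.default_rng(0)
x_hi = rng.random((32, 32, 3))                     # hi-res RGB image
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]          # K = 4 shifted views

y = [np.stack([downsample(blur(warp(x_hi[..., c], dy, dx)))
               for c in range(3)], axis=-1)
     + 0.01 * rng.standard_normal((16, 16, 3))     # additive sensor noise
     for dy, dx in shifts]
```

Each element of `y` is one noisy lo-res observation; a super-resolution algorithm inverts this chain by registering the observations on the fine grid and interpolating the missing pixels.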
In this work we consider explicitly the problem of color image super-resolution. The key contribution of this work is the inclusion of a color regularizer that effectively accounts for both luminance and chrominance geometry in images. We propose an optimization framework that is separably convex, leading to convergence guarantees, along with the enforcement of constraints consistent with real-world imaging physics. We show results demonstrating substantial image quality improvement over the state of the art, especially for images with significant chrominance edge features.

Image-adaptive color super-resolution

We first present the mathematical formulation of our color SR framework. We use the camera imaging model [1]:

$$y_k = D B T_k x + n_k, \quad 1 \le k \le K, \tag{1}$$

where $x = [x_r^T \; x_g^T \; x_b^T]^T \in \mathbb{R}^{3n}$ is the unknown (vectorized) hi-res image that we seek to reconstruct (subscripts r, g, and b indicate the red, green, and blue color channels, respectively), $y_k \in \mathbb{R}^m$ represents the k-th observed lo-res image, $T_k \in \mathbb{R}^{3n \times 3n}$ is the k-th geometric warping matrix, $B \in \mathbb{R}^{3n \times 3n}$ describes the camera optical blur, the downsampling matrix $D \in \mathbb{R}^{m \times 3n}$ models the aliasing, and $n_k \in \mathbb{R}^m$ is the noise vector that corrupts $y_k$.

Single-channel super-resolution

The standard SR reconstruction problem recovers an estimate of x by minimizing the error between the warped, blurred and downsampled versions of x as predicted by the imaging

Proceedings Article
07 Nov 2011
TL;DR: It is demonstrated that the observer categories, determined based on individual differences in cone spectral sensitivities (and thus color matching functions), have an influence on the prediction of average suprathreshold color difference perception for a given observer population.
Abstract: In this paper we investigate the impact of colorimetric observer categories on the prediction of the average suprathreshold color difference perception. The observer categories were obtained from an observer classification experiment, while the color difference data were obtained from an experiment involving a liquid crystal display (LCD) with fluorescent backlight. The same observer panel with normal color vision participated in both experiments. Results obtained from the observer classification experiment were consistent with the average observer threshold for color difference judgment. This analysis demonstrates that the observer categories, determined based on individual differences in cone spectral sensitivities (and thus color matching functions), have an influence on the prediction of average suprathreshold color difference perception for a given observer population.

Proceedings Article
01 Nov 2011
TL;DR: A flexible image-difference framework that models the visual mechanisms responsible for assessing image differences using an empirical data-mining strategy, showing good correlation with subjective judgments on the Tampere Image Database 2008.
Abstract: An accurate image-difference measure would greatly simplify the optimization of imaging systems and image processing algorithms. The prediction performance of existing methods is limited because the visual mechanisms responsible for assessing image differences are not well understood. This applies especially to the cortical processing of complex visual stimuli. We propose a flexible image-difference framework that models these mechanisms using an empirical data-mining strategy. A pair of input images is first normalized to specific viewing conditions by an image appearance model. Various image-difference features (IDFs) are then extracted from the images. These features represent assumptions about visual mechanisms that are responsible for judging image differences. Several IDFs are combined in a blending step to optimize the correlation between image-difference predictions and corresponding human assessments. We tested our method on the Tampere Image Database 2008, where it showed good correlation with subjective judgments. Comparisons with other image-difference measures were also performed.

Introduction

An image-difference measure (IDM) that accurately predicts human judgments is the Holy Grail of perception-based image processing. An IDM takes two images and parameters that specify the viewing conditions (e.g., viewing distance, illuminant, and luminance level). It returns a prediction of the perceived difference between the images under the specified viewing conditions. An accurate IDM could supersede the tedious psychophysical experiments that are required to optimize imaging systems and image processing algorithms. In the past decades, many attempts have been made to create increasingly sophisticated IDMs. Unfortunately, evaluations show that they cannot yet replace human judgments for a wide range of distortions and arbitrary images [1, 2].
How an observer perceives a distortion depends on their interpretation of the image content; for example, changing a person's skin color is likely to cause a larger perceived difference than changing the color of a wall by the same amount. It is therefore improbable that IDMs will perfectly predict human perception before cortical visual processing is comprehensively understood. However, IDMs could provide a reasonable median prediction of human judgments for only a few selected distortions, e.g., lossy compression or gamut mapping.

The Role of Image Appearance Models

Many IDMs use image appearance models such as S-CIELAB [3], Pattanaik's multiscale model [4], or iCAM [5, 6] to transform the input images into an opponent color space defined for specific viewing conditions (e.g., 10° observer, illuminant D65, and average viewing distance). This can be seen as a normalization of the images to the given viewing conditions. Advanced models also consider various appearance phenomena to adjust pixel values to human perception. Typically, they account for spatial properties of the visual system by convolving the images with the chromatic and achromatic contrast sensitivity functions. This allows a meaningful pixelwise comparison of, e.g., halftone and continuous-tone images. For instance, S-CIELAB has been used as an IDM [7] in combination with the CIEDE2000 [8] color-difference formula. Note that image appearance models are still an active research area and have room for improvement. Ideally, they normalize an input image to specific viewing conditions and remove imperceptible content. The result is an image in an opponent color space from which color attributes (lightness, chroma, and hue) can be obtained for each pixel. This space is referred to as the working color space in the following.
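The CSF-convolution idea can be illustrated with a toy example. Here Gaussian lowpass filters stand in for the achromatic and chromatic contrast sensitivity functions (the actual S-CIELAB filters are different, and all parameter values below are illustrative assumptions); after this normalization, a halftone-like pattern and a flat gray field of the same mean become nearly indistinguishable pixelwise, which is exactly what a meaningful comparison of halftone and continuous-tone images requires.

```python
# Sketch: lowpass filtering of opponent channels as a stand-in for CSF
# convolution. Kernel widths are illustrative, not S-CIELAB's.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_opponent(img_opp, sigmas=(1.0, 2.5, 2.5)):
    """img_opp: (H, W, 3) opponent image (achromatic, red-green, blue-yellow).
    Chromatic channels get stronger blur, mimicking their lower CSF cutoff."""
    return np.stack([gaussian_filter(img_opp[..., c], sigmas[c])
                     for c in range(3)], axis=-1)

# A binary halftone-like pattern vs. a flat gray field of the same mean.
halftone = np.indices((64, 64)).sum(axis=0) % 2 == 0
img_a = np.repeat(halftone[..., None].astype(float), 3, axis=2)
img_b = np.full((64, 64, 3), 0.5)

before = np.abs(img_a - img_b).mean()   # large pixelwise difference
after = np.abs(normalize_opponent(img_a) - normalize_opponent(img_b)).mean()
```

After normalization the residual difference is a small fraction of the raw pixelwise difference, so a subsequent color-difference computation no longer penalizes the halftone pattern the eye cannot resolve.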
The Role of the Color Space

It is advantageous for image-difference analysis if the working color space is highly perceptually uniform, meaning that Euclidean distances correlate well with perceived color differences. Note that a color space cannot be perfectly perceptually uniform because of geometrical issues and the effect of diminishing returns in color-difference perception [9]. In addition, color-difference data are obtained using color patches instead of complex visual stimuli. Nevertheless, image gradients and edges require perceptually meaningful normalization, i.e., their perceptual magnitudes should be reflected by the corresponding values as closely as possible. Analyzing such image features in a highly non-uniform color space may cause an over- or underestimation of their perceptual significance.

Image-Difference Features

Many IDMs create image-difference maps showing perceived pixel deviations between two input images. For image-difference evaluation, these maps are transformed into a single characteristic value, such as the mean or the 95th percentile. However, psychophysical experiments show that the degree of difference visibility is not well correlated with perceived overall image difference [10]. For example, global intensity changes are generally less objectionable than compression artifacts [10]. It is therefore likely that the prediction performance of IDMs that operate only on image-difference maps can be improved. Our approach uses hypotheses of perceptually significant image differences. We call these hypotheses image-difference features (IDFs). Various examples can be found in the literature [10, 11, 12]. Fig. 1 outlines the normalization and feature-extraction steps of our proposed image-difference framework. We assess the relevance of our IDFs using data that relate image distortions (e.g., noise, lossy compression) to perceived image differences. A vector of IDFs is computed for each image pair (reference image and distorted image).
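As a concrete, deliberately simplified illustration of the feature-extraction and blending steps, the sketch below computes three scalar IDFs per image pair and fits a linear blend against mean opinion scores. The particular features, the linear blend, and all function names are illustrative assumptions; the paper's actual IDFs and blending step are more elaborate.

```python
# Sketch: scalar image-difference features (IDFs) per image pair, blended
# against mean opinion scores (MOS). Feature choices are illustrative.
import numpy as np

def idf_vector(ref, dist):
    """ref, dist: (H, W) lightness images of an image pair."""
    d = np.abs(ref - dist)
    return np.array([
        d.mean(),                        # average pixelwise difference
        np.percentile(d, 95),            # strength of the worst regions
        abs(ref.std() - dist.std()),     # global contrast change
    ])

def fit_blend(features, mos):
    """Least-squares linear blend of IDF vectors against MOS values."""
    A = np.hstack([features, np.ones((len(features), 1))])  # add bias term
    w, *_ = np.linalg.lstsq(A, mos, rcond=None)
    return w

def predict(w, feats):
    """Predicted perceived difference for one IDF vector."""
    return np.hstack([feats, 1.0]) @ w
```

In practice the blend would be trained on a database such as TID2008 and the correlation between predictions and MOS used to assess which IDF hypotheses carry perceptual weight.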
This allows us to determine the correlations of individual IDFs with the perceived differences of the image pairs, which are expressed by mean opinion scores (MOS).

[Figure 1: Image pairs are normalized to the viewing conditions by an image appearance model (from CIEXYZ color space); image-difference features (IDFs) are then computed based on hypotheses of perceptually significant image differences.]