
Showing papers presented at "Color Imaging Conference in 2020"


Proceedings ArticleDOI
15 Oct 2020
TL;DR: A Denial of Service (DoS) attack that can hinder the functionality of a smart farm by disrupting deployed on-field sensors is demonstrated and a Wi-Fi deauthentication attack that exploits IEEE 802.11 vulnerabilities is discussed, where the management frames are not encrypted.
Abstract: Smart farming also known as precision agriculture is gaining more traction for its promising potential to fulfill increasing global food demand and supply. In a smart farm, technologies and connected devices are used in a variety of ways, from finding the real-time status of crops and soil moisture content to deploying drones to assist with tasks such as applying pesticide spray. However, the use of heterogeneous internet-connected devices has introduced numerous vulnerabilities within the smart farm ecosystem. Attackers can exploit these vulnerabilities to remotely control and disrupt data flowing from/to on-field sensors and autonomous vehicles like smart tractors and drones. This can cause devastating consequences especially during a high-risk time, such as harvesting, where live-monitoring is critical. In this paper, we demonstrate a Denial of Service (DoS) attack that can hinder the functionality of a smart farm by disrupting deployed on-field sensors. In particular, we discuss a Wi-Fi deauthentication attack that exploits IEEE 802.11 vulnerabilities, where the management frames are not encrypted. A MakerFocus ESP8266 Development Board WiFiDeauther Monster is used to detach the connected Raspberry Pi from the network and prevent sensor data from being sent to the remote cloud. Additionally, this attack was expanded to include the entire network, obstructing all devices from connecting to the network. To this end, we urge practitioners to be aware of current vulnerabilities when deploying smart farming ecosystems and encourage the cyber-security community to further investigate the domain-specific characteristics of smart farming.
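The attack works because IEEE 802.11 management frames carry no cryptographic protection, so a deauthentication frame is just a short, forgeable byte sequence. The sketch below is a structural illustration only (placeholder MAC addresses, frame never transmitted); it is not the ESP8266-based tool used in the paper:

```python
import struct

def build_deauth_frame(ap_mac: bytes, client_mac: bytes, reason: int = 7) -> bytes:
    """Forge a bare IEEE 802.11 deauthentication management frame.

    Management frames are unencrypted, which is the vulnerability the
    deauth attack exploits. The MAC addresses here are placeholders.
    """
    frame_control = 0x00C0   # type 0 (management), subtype 12 (deauthentication)
    duration = 0
    seq_ctrl = 0
    header = struct.pack("<HH6s6s6sH",
                         frame_control, duration,
                         client_mac,   # addr1: destination (victim station)
                         ap_mac,       # addr2: source (spoofed access point)
                         ap_mac,       # addr3: BSSID
                         seq_ctrl)
    body = struct.pack("<H", reason)   # reason 7: class 3 frame from nonassociated STA
    return header + body

frame = build_deauth_frame(b"\xaa\xbb\xcc\xdd\xee\xff", b"\x11\x22\x33\x44\x55\x66")
# 24-byte MAC header + 2-byte reason code = 26 bytes
```

Because nothing in the frame is authenticated, a victim station cannot distinguish this forgery from a genuine deauthentication sent by its access point.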

56 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, the authors identify 10 key research topics in IoT and discuss the research problems and opportunities within each of these topics.
Abstract: Since the term first coined in 1999 by Kevin Ashton, the Internet of Things (IoT) has gained significant momentum as a technology to connect physical objects to the Internet and to facilitate machine-to-human and machine-to-machine communications. Over the past two decades, IoT has been an active area of research and development endeavors by many technical and commercial communities. Yet, IoT technology is still not mature and many issues need to be addressed. In this paper, we identify 10 key research topics and discuss the research problems and opportunities within these topics.

19 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, the authors proposed a computational storage system called HydraSpace with multi-layered storage architecture and practical compression algorithms to manage the sensor pipe data, and discussed five open questions related to the challenge of storage design for autonomous vehicles.
Abstract: To ensure the safety and reliability of an autonomous driving system, multiple sensors have been installed in various positions around the vehicle to eliminate any blind point which could bring potential risks. Although the sensor data is quite useful for localization and perception, the high volume of these data becomes a burden for on-board computing systems. More importantly, the situation will worsen with the demand for increased precision and reduced response time of self-driving applications. Therefore, how to manage this massive amount of sensed data has become a big challenge. The existing vehicle data logging system cannot handle sensor data because both the data type and the amount far exceed its processing capability. In this paper, we propose a computational storage system called HydraSpace with multi-layered storage architecture and practical compression algorithms to manage the sensor pipe data, and we discuss five open questions related to the challenge of storage design for autonomous vehicles. According to the experimental results, the total reduction of storage space is achieved by 88.6% while maintaining the comparable performance of the self-driving applications.
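HydraSpace's compression algorithms are not detailed in the abstract; as a rough stand-in illustration, even a general-purpose lossless compressor such as zlib shrinks repetitive sensor pipe data substantially (the readings below are made up; the paper's 88.6% figure refers to its own system):

```python
import json
import zlib

# Hypothetical repetitive sensor readings (timestamp, lidar range, vehicle speed).
readings = [{"t": i, "lidar_m": 12.5, "speed_mps": 8.3} for i in range(1000)]
raw = json.dumps(readings).encode()

# Lossless compression: the highly regular structure compresses very well.
compressed = zlib.compress(raw, level=9)
saved = 1 - len(compressed) / len(raw)
print(f"raw={len(raw)}B compressed={len(compressed)}B saved={saved:.1%}")
```

Real sensor pipes (camera, lidar point clouds) are far less regular than this toy log, which is why the paper pairs a layered storage architecture with compression schemes chosen per data type.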

12 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: A self-sustained ecosystem for energy sharing in the IoT environment is proposed in this article, where the authors leverage energy harvesting, wireless power transfer, and crowdsourcing that facilitate the development of an energy crowdsharing framework to charge IoT devices.
Abstract: We propose a novel self-sustained ecosystem for energy sharing in the IoT environment. We leverage energy harvesting, wireless power transfer, and crowdsourcing that facilitate the development of an energy crowdsharing framework to charge IoT devices. The ubiquity of IoT devices coupled with the potential ability for sharing energy provides new and exciting opportunities to crowdsource wireless energy, thus enabling a green alternative for powering IoT devices anytime and anywhere. We discuss the crowdsharing of energy services, open challenges, and proposed solutions.

11 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, the authors evaluate semantic healthcare procedure code embeddings on a Medicare fraud classification problem using publicly available big data, and train Word2Vec models on sequences of co-occurring codes from the Healthcare Common Procedure Coding System (HCPCS).
Abstract: This study evaluates semantic healthcare procedure code embeddings on a Medicare fraud classification problem using publicly available big data. Traditionally, categorical Medicare features are one-hot encoded for the purpose of supervised learning. One-hot encoding thousands of unique procedure codes leads to high-dimensional vectors that increase model complexity and fail to capture the inherent relationships between codes. We address these shortcomings by representing procedure codes using low-rank continuous vectors that capture various dimensions of similarity. We leverage publicly available data from the Centers for Medicare and Medicaid Services, with more than 56 million claims records, and train Word2Vec models on sequences of co-occurring codes from the Healthcare Common Procedure Coding System (HCPCS). Continuous-bag-of-words and skip-gram embeddings are trained using a range of embedding and window sizes. The proposed embeddings are empirically evaluated on a Medicare fraud classification problem using the Extreme Gradient Boosting learner. Results are compared to both one-hot encodings and pre-trained embeddings from related works using the area under the receiver operating characteristic curve and geometric mean metrics. Statistical tests are used to show that the proposed embeddings significantly outperform one-hot encodings with 95% confidence. In addition to our empirical analysis, we briefly evaluate the quality of the learned embeddings by exploring nearest neighbors in vector space. To the best of our knowledge, this is the first study to train and evaluate HCPCS procedure embeddings on big Medicare data.
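As a small illustration of the skip-gram side of Word2Vec, training pairs can be generated from a sequence of co-occurring codes within a context window (the codes below are illustrative, not drawn from the paper's dataset):

```python
def skipgram_pairs(sequence, window=2):
    """Yield (target, context) pairs for skip-gram training.

    Each code is paired with every other code within `window` positions,
    mirroring how Word2Vec treats the co-occurring procedure codes of a
    claim as a 'sentence'.
    """
    pairs = []
    for i, target in enumerate(sequence):
        lo, hi = max(0, i - window), min(len(sequence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, sequence[j]))
    return pairs

# Hypothetical claim: HCPCS-style codes billed together by one provider.
claim = ["99213", "G0008", "90471", "36415"]
print(skipgram_pairs(claim, window=1))
```

A shallow network trained to predict context codes from target codes over millions of such pairs yields the low-rank continuous vectors the paper evaluates; codes that co-occur in similar claims end up close together in the embedding space.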

9 citations


Journal ArticleDOI
04 Nov 2020
TL;DR: This paper presents a method for jointly estimating the spectral reflectance and the SPD of each projector primary, modeling the SPD with a low-dimensional model whose basis functions are obtained from a newly collected database of projector SPDs.
Abstract: A lighting-based multispectral imaging system using an RGB camera and a projector is one of the most practical and low-cost systems to acquire multispectral observations for estimating the scene's spectral reflectance information. However, existing projector-based systems assume that the spectral power distribution (SPD) of each projector primary is known, which requires additional equipment such as a spectrometer to measure the SPD. In this paper, we present a method for jointly estimating the spectral reflectance and the SPD of each projector primary. In addition to adopting a common spectral reflectance basis model, we model the projector's SPD by a low-dimensional model using basis functions obtained by a newly collected projector's SPD database. Then, the spectral reflectances and the projector's SPDs are alternatively estimated based on the basis models. We experimentally show the performance of our joint estimation using a different number of projected illuminations and investigate the potential of the spectral reflectance estimation using a projector with unknown SPD.

9 citations


Journal ArticleDOI
04 Nov 2020
TL;DR: The intention is to create practical models, which can well explain the detection performance for natural viewing in a wide range of conditions, and can find applications in modeling visual performance for high dynamic range and augmented reality displays.
Abstract: We model color contrast sensitivity for Gabor patches as a function of spatial frequency, luminance and chromaticity of the background, modulation direction in the color space, and stimulus size. To fit the model parameters, we combine the data from five independent datasets, which lets us make predictions for background luminance levels between 0.0002 cd/m2 and 10,000 cd/m2, and for spatial frequencies between 0.06 cpd and 32 cpd. The data are well explained by two models: a model that encodes cone contrast and a model that encodes postreceptoral, opponent-color contrast. Our intention is to create practical models which can explain the detection performance for natural viewing in a wide range of conditions. As our models are fitted to data spanning a very large range of luminances, they can find applications in modeling visual performance for high dynamic range and augmented reality displays.

8 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, a security-aware provenance graph-based design for explaining AI-based decision-making is proposed to provide end-users with sufficient meta-information to understand AI decision making.
Abstract: Deriving explanations of an Artificial Intelligence-based system's decision making is becoming increasingly essential to address requirements that meet quality standards and operate in a transparent, comprehensive, understandable, and explainable manner. Furthermore, more security issues as well as concerns from human perspectives emerge in describing the explainability properties of AI. A full system view is required to enable humans to properly estimate risks when dealing with such systems. This paper introduces open issues in this research area to present the overall picture of explainability and the required information needed for the explanation to make a decision-oriented AI system transparent to humans. It illustrates the potential contribution of proper provenance data to AI-based systems by describing a provenance graph-based design. This paper proposes a six-Ws framework to demonstrate how a security-aware provenance graph-based design can build the basis for providing end-users with sufficient meta-information on AI-based decision systems. An example scenario is then presented that highlights the required information for better explainability both from human and security-aware aspects. Finally, associated challenges are discussed to provoke further research and commentary.

7 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, a study of N = 620 participants found that users are aware of frequent security attacks, including phishing, and that risk communication offers the potential to increase MFA adoption.
Abstract: Exposure of passwords for authentication and access management is a ubiquitous and constant threat. Yet, reliable solutions, including multi-factor authentication (MFA), face issues with widespread adoption. Prior research shows that making MFA mandatory helps with tool adoption but is detrimental to users' mental models and leads to security-avoidance behavior. To explore feasible solutions, we implemented text- and video-based risk communication strategies to evaluate if either mode of risk communication was useful. We sought to explore users' technical biases to further examine the mental models that are associated with safer security habits. Our study of N = 620 participants found that users are aware of frequent security attacks, including phishing. We found that text- and video-based communication is often useful when information is aligned with individual actions and their consequences, which can range from benign to catastrophic. Shorter mental-model-aligned video snippets piqued user interest in MFA. On the other hand, detailed risk communication videos or textual descriptions improved users' understanding of MFA and the potential risks of non-usage. Our study indicates that, beyond usability and comprehensive education, risk communication offers the potential to increase MFA adoption.

6 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, the authors proposed a privacy-preserving surveillance as an edge service (PriSE) based on a spectrum of image processing, image scrambling, and deep learning based mechanisms.
Abstract: With a myriad of edge cameras deployed in urban areas, many people are seriously concerned about the invasion of their privacy. The edge computing paradigm allows enforcing privacy-preserving measures at the point where the video frames are created. However, the resource constraints at the network edge make existing compute-intensive privacy-preserving solutions unaffordable. In this paper, we propose slenderized and efficient methods for Privacy-preserving Surveillance as an Edge service (PriSE) after investigating a spectrum of image-processing, image scrambling, and deep learning (DL) based mechanisms. At the edge cameras, the PriSE introduces an efficient and lightweight Reversible Chaotic Masking (ReCAM) scheme preceded by a simple foreground object detector. The scrambling scheme prevents an interception attack by ensuring end-to-end privacy. The simplified motion detector helps save bandwidth, processing time, and storage by discarding those frames that contain no foreground objects. On a fog/cloud server, the scrambling scheme is coupled with a robust window-detector to prevent peeping via windows and a multi-tasked convolutional neural network (MTCNN) based face-detector for the purpose of de-identification. The extensive experimental studies and comparative analysis show that the PriSE is able to efficiently detect foreground objects and scramble frames at the edge cameras, and detect and denature window and face objects at a fog/cloud server to ensure end-to-end communication privacy and anonymity, respectively. This is done just before the frames are sent to the viewing stations.
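The exact ReCAM construction is not given in the abstract; the following toy sketch shows the general idea of a reversible chaotic mask, where a keystream derived from the logistic map is XORed over frame bytes and applying the same key a second time restores the original:

```python
def chaotic_keystream(x0: float, r: float, n: int) -> bytes:
    """Generate n keystream bytes by iterating the logistic map x -> r*x*(1-x).

    With r near 4 the orbit is chaotic, so the keystream depends
    sensitively on the key (x0, r).
    """
    out = bytearray()
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def scramble(frame: bytes, key=(0.3141592, 3.9999)) -> bytes:
    """XOR-mask frame bytes; applying twice with the same key restores them."""
    ks = chaotic_keystream(key[0], key[1], len(frame))
    return bytes(b ^ k for b, k in zip(frame, ks))

frame = bytes(range(256))            # stand-in for a row of pixel data
masked = scramble(frame)
assert scramble(masked) == frame     # reversible with the same key
```

XOR masking makes descrambling at the fog/cloud server the same cheap operation as scrambling at the camera, which matches the edge-resource constraints the paper emphasizes; the paper's actual scheme may differ in its map and key handling.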

6 citations


Proceedings ArticleDOI
01 Dec 2020
TL;DR: A survey of 131 clinical audiologists found that only 9.9% reported at least one data breach in 2019, significantly less than the average for small businesses and health care providers, and only 24.4% reported having cyber insurance as mentioned in this paper.
Abstract: Despite well-documented cyber threats to patients' protected health information (PHI), sparse evidence exists about the state of cybersecurity behavior of health care workers and medical private practices. There is evidence of insecure behavior in hospital settings, even though specific insights about private practice are still absent. In addition to mandatory standards for securing PHI, such as the Health Insurance Portability & Accountability Act (HIPAA), small business viability and their patients' security and privacy are critically dependent upon technology availability and reliability. In this survey of 131 clinical audiologists, we show that many lack time, staff expertise, or funds to deploy adequate cybersecurity that prevents and mitigates threats to security and privacy. We find widespread deployment of HIPAA-compliant cybersecurity, including antivirus software and individual logins. Only 9.9% of participants reported at least one data breach in 2019, significantly less than the average for small businesses and health care providers, and only 24.4% reported having cyber insurance. Practice owners view patient data as well protected and see their practices as unlikely victims of cyber attacks and breaches. These results have important implications for cybersecurity products and services, and for medical professionals, who must acknowledge the acute importance of cybersecurity in securing protected health information and mitigating risks. Small business private practice health care providers are particularly sensitive to the impacts of cyber attacks and must prioritize and adopt countermeasures that decrease the risks to patients and their own businesses.

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, the authors discuss the main features that make the Internet of Things (IoT) a unique ecosystem that calls for new software development solutions, and claim that there is a need to rethink the techniques and methodologies for developing IoT systems and applications.
Abstract: The Internet of Things (IoT) represents the next significant step in the evolution of the Internet. It will allow “things” to be connected anytime, anywhere, with anything and anyone, providing a myriad of novel applications and augmented services to citizens, governments, and enterprises. We believe that for the IoT to reach its full potential, it will be necessary to advance the investigation of techniques and technologies to build systems in such a brand-new scenario. In the IoT, the collaborating entities encompass both physical and virtual resources, interactions occur both in an active and programmed way as well as by chance, and therefore it is necessary to deal with expected but also emerging behaviors of the collaborating parties. In this paper, we first discuss the main features that make the IoT a unique ecosystem and that, as such, call for new software development solutions. We claim that there is a need to rethink the techniques and methodologies for developing IoT systems and applications. Novel models, architectural approaches and techniques should be proposed, or existing ones should be adapted to deal with the high heterogeneity, dynamism, serendipity and interdependencies that are typical of the IoT ecosystem. We then analyze and discuss potential key design solutions that deserve a deeper understanding in order to pave the way for the building of this new generation of systems. Our discussion is presented from a bottom-up perspective: from the modeling of devices that make up the IoT to the representation of requirements that must be addressed when engineering IoT systems.

Journal ArticleDOI
04 Nov 2020
TL;DR: It is found that color distortions due to secondary illumination from chromatic furnishing materials led to systematic and significant color shifts, and major differences between the lamp-specified color rendition and temperature and the actual light-based "effective color rendering" and "effective color temperature".
Abstract: In complex scenes, the light reflected by surfaces causes secondary illumination, which contributes significantly to the actual light in the space (the "light field"). Secondary illumination is dependent on the primary illumination, geometry, and materials of a space. Hence, primary illumination and secondary illumination can have non-identical spectral properties, and render object colors differently. Lighting technology and research predominantly relies on the color rendering properties of the illuminant. Little attention has been given to the impact of secondary illumination on the "effective color rendering" within light fields. Here we measure the primary and secondary illumination for a simple spatial geometry and demonstrate empirically their differential "effective color rendering" properties. We found that color distortions due to secondary illumination from chromatic furnishing materials led to systematic and significant color shifts, and major differences between the lamp-specified color rendition and temperature and the actual light-based "effective color rendering" and "effective color temperature". On the basis of these results we propose a methodological switch from assessing the color rendering and temperature of illuminants only to assessing the "effective color rendering and temperature" in context too.

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, the authors survey long short-term memory (LSTM) models for the automated de-identification of medical free text, highlighting the outstanding results obtained in several studies.
Abstract: The confidentiality of patient information is legislated by governmental regulations in various countries, such as the Health Insurance Portability and Accountability Act (HIPAA) standards in the USA. Under these laws, adequate protections must be in place to safeguard patients' health records, which are often big data comprised of free text. Machine learning approaches are extensively used for the automated de-identification of medical free text, with outstanding results obtained from several studies that incorporate long short-term memory (LSTM) networks. These networks are a variant of the recurrent neural network (RNN) architecture. Our survey covers LSTM models from the past five years, and the contribution of the findings is appreciable. Performance-wise, LSTMs generally surpassed other types of models used in the automated de-identification of free text, namely conditional random field (CRF) algorithms and rule-based algorithms. In addition, hybrid or ensemble LSTM models did not outperform LSTM-only models. Finally, we note that the customization of gold-standard de-identification datasets may result in overfitted models.

Journal ArticleDOI
04 Nov 2020
TL;DR: In this article, the authors investigated why observer metamers of white are usually pinkish or greenish; most observers who participated in a visual demonstration reported that white observer metamers appear pinkish or greenish but rarely yellowish or bluish.
Abstract: White lighting and neutral-appearing objects are essential in numerous color applications. In particular, setting or tuning a reference white point is a key procedure in both camera and display applications. Various studies on observer metamerism have pointed out that noticeable color disagreements between observers mainly appear in neutral colors. Thus, it is vital to understand how observer metamers of white (or neutral) appear in different colors to different observers. Most observers who participated in a visual demonstration reported that white observer metamers appear pinkish or greenish but rarely yellowish or bluish. In this paper, this intriguing question, "Why are observer metamers of white usually pinkish or greenish?", is addressed based on simulations. In addition, we analyze which physiological factors play an essential role in this phenomenon and why humans are less likely to perceive yellowish or bluish observer metamers of white.

Journal ArticleDOI
01 Nov 2020
TL;DR: It is shown that colour correction and colour blending can automate the painstaking colour editing task and save time for consumer colour preference researchers.
Abstract: We present a simple primary colour editing method for consumer product images. We show that by using colour correction and colour blending, we can automate the painstaking colour editing task and save time for consumer colour preference researchers. To improve the colour harmony between the primary colour and its complementary colours, our algorithm also tunes the other colours in the image. A preliminary experiment has shown some promising results compared with a state-of-the-art method and human editing.

Journal ArticleDOI
04 Nov 2020
TL;DR: The proposed method involves using a drone to carry a grey ball of known percent surface spectral reflectance throughout a scene while photographing it frequently during the flight using a calibrated camera, which provides a measure of the illumination colour at that location.
Abstract: For research in the field of illumination estimation and colour constancy there is a need for ground truth measurement of the illumination colour at many locations within multi-illuminant scenes. A practical approach to obtaining such ground truth illumination data is presented here. The proposed method involves using a drone to carry a grey ball of known percent surface spectral reflectance throughout a scene while photographing it frequently during the flight using a calibrated camera. The captured images are then post-processed. In the post-processing step, machine vision techniques are used to detect the grey ball within each frame. The camera RGB of light reflected from the grey ball provides a measure of the illumination colour at that location. In total, the dataset contains 30 scenes with 100 illumination measurements on average per scene. The dataset is available for download free of charge.
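The recovery step follows from the ball being a known grey reflector: dividing the ball's camera RGB by its reflectance yields the illuminant RGB at that location. A minimal sketch with made-up values:

```python
def illuminant_rgb(ball_rgb, reflectance=0.5):
    """Estimate the illumination colour from the grey ball's camera RGB.

    For a Lambertian grey ball, camera response ~= reflectance * illuminant,
    so dividing out the known reflectance recovers the illuminant RGB.
    The reflectance value and RGB values here are illustrative placeholders.
    """
    return tuple(c / reflectance for c in ball_rgb)

# Hypothetical linear-RGB measurement of the ball in one drone frame.
print(illuminant_rgb((0.21, 0.25, 0.31)))
```

This assumes a radiometrically calibrated camera producing linear RGB, which is why the paper emphasizes camera calibration before the grey-ball detection and measurement steps.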

Journal ArticleDOI
04 Nov 2020
TL;DR: It is argued that the Vora-Value is a suitable way to measure subspace similarity, and an optimization method for finding a filter that maximizes the Vora-Value measure is developed.
Abstract: The Luther condition states that if the spectral sensitivity responses of a camera are a linear transform from the color matching functions of the human visual system, the camera is colorimetric. Previous work proposed to solve for a filter which, when placed in front of a camera, results in sensitivities that best satisfy the Luther condition. By construction, the prior art solves for a filter for a given set of human visual sensitivities, e.g. the XYZ color matching functions or the cone response functions. However, depending on the target spectral sensitivity set, a different optimal filter is found. This paper begins with the observation that the cone fundamentals, XYZ color matching functions or any linear combination thereof span the same 3-dimensional subspace. Thus, we set out to solve for a filter that makes the vector space spanned by the filtered camera sensitivities as similar as possible to the space spanned by human vision sensors. We argue that the Vora-Value is a suitable way to measure subspace similarity and we develop an optimization method for finding a filter that maximizes the Vora-Value measure. Experiments demonstrate that our new optimization leads to filtered camera sensitivities which have a significantly higher Vora-Value compared with antecedent methods.
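The Vora-Value between two sensor sets is commonly computed from the orthogonal projectors onto their column spaces, V = trace(P_A P_B)/3. A small numpy sketch with stand-in sensitivities (random placeholders, not measured curves):

```python
import numpy as np

def projector(M: np.ndarray) -> np.ndarray:
    """Orthogonal projector onto the column space of M."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

def vora_value(A: np.ndarray, B: np.ndarray) -> float:
    """Vora-Value: trace(P_A P_B) / 3 for trichromatic sensor sets.

    Equals 1 when A and B span the same 3-D subspace and 0 when the
    subspaces are orthogonal; it is invariant to any invertible linear
    recombination of either sensor set.
    """
    return float(np.trace(projector(A) @ projector(B))) / 3.0

rng = np.random.default_rng(0)
S = rng.random((31, 3))      # stand-in camera sensitivities, 31 wavelength samples
print(vora_value(S, S))      # identical subspaces give 1.0
```

The invariance to linear recombination is exactly why the paper optimizes this measure: it does not matter whether the human subspace is represented by the cone fundamentals or the XYZ color matching functions.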

Journal ArticleDOI
04 Nov 2020
TL;DR: A model that relies on the contrast sensitivity function (CSF) of the visual system, and hence, predicts the visibility of banding artefacts in a perceptually accurate way is developed and validated.
Abstract: Banding is a type of quantisation artefact that appears when a low-texture region of an image is coded with insufficient bitdepth. Banding artefacts are well-studied for standard dynamic range (SDR), but are not well-understood for high dynamic range (HDR). To address this issue, we conducted a psychophysical experiment to characterise how well human observers see banding artefacts across a wide range of luminances (0.1 cd/m2– 10,000 cd/m2). The stimuli were gradients modulated along three colour directions: black-white, red-green, and yellow-violet. The visibility threshold for banding artefacts was the highest at 0.1 cd/m2, decreased with increasing luminance up to 100 cd/m2, then remained at the same level up to 10,000 cd/m2. We used the results to develop and validate a model of banding artefact detection. The model relies on the contrast sensitivity function (CSF) of the visual system, and hence, predicts the visibility of banding artefacts in a perceptually accurate way.

Journal ArticleDOI
04 Nov 2020
TL;DR: The authors found visual and auditory adaptation effects on metallic material appearance but no cross-modal audiovisual adaptation effect, and created a model of the linear sum of the visual and audio adaptation effects.
Abstract: In this paper, we investigated the effects of visual and auditory adaptation on material appearance. The target in this study was metallic perception. First, participants evaluated CG images using sounds and other images. In the experiment, we prepared metallic stimulus under various adaptation conditions with different combinations of metal image, non-metal image, metal sound, and non-metal sound stimuli. After these adaptations, the participants answered "metal" or "non-metal" after viewing a displayed reference image. The reference images were generated by interpolating metal and non-metal images. Next, we analyzed the results and clarified the effects of visual, auditory, and audiovisual adaptations on the metallic perception. For analyzing results, we used a logistic regression analysis based on Bayesian statistics. From the analysis results, we found visual and auditory adaptation effects. On the other hand, we did not find the cross-modal effects of audiovisual adaptation. Finally, we created a model of the linear sum of the visual and audio adaptation effects on metallic material appearance.

Proceedings ArticleDOI
02 Dec 2020
TL;DR: In this paper, the authors propose proactive Digital Companions that take advantage of the new generation of pervasive hypermedia environments to provide assistance and protection to people, by perceiving a person's environment through vision and sound.
Abstract: Artificial companions and digital assistants have been investigated for several decades, from research in the autonomous agents and social robots areas to the highly popular voice-enabled digital assistants that are already in widespread use (e.g., Siri and Alexa). Although these companions provide valuable information and services to people, they remain reactive entities that operate in isolated environments waiting to be asked for help. The Web is now emerging as a uniform hypermedia fabric that interconnects everything (e.g., devices, physical objects, abstract concepts, digital services), thereby enabling unprecedented levels of automation and comfort in our professional and private lives. However, this also results in increasingly complex environments that are becoming unintelligible to everyday users. To ameliorate this situation, we envision proactive Digital Companions that take advantage of this new generation of pervasive hypermedia environments to provide assistance and protection to people. In addition to Digital Companions perceiving a person's environment through vision and sound, pervasive hypermedia environments provide them with means to further contextualize the situation by exploiting information from available connected devices, and give them access to rich knowledge bases that allow to derive relevant actions and recommendations.

Journal ArticleDOI
04 Nov 2020
TL;DR: A simple modification of the von Kries chromatic adaptation transform is introduced, referred to as vK20, that can account for the asymmetry in Chromatic adaptation through inclusion of previous adapting conditions.
Abstract: Recent data has shown that the process of chromatic adaptation might be asymmetrical, or irreversible, and that this effect might be more than simply a manifestation of the time course of adaptation. This paper introduces a simple modification of the von Kries chromatic adaptation transform, referred to as vK20, that can account for the asymmetry in chromatic adaptation through inclusion of previous adapting conditions. Also introduced is a new reference chromaticity (∼15000K) for degree of adaptation that seems more physiologically plausible than the commonly used equal-energy (EE) illuminant or CIE illuminant D65.
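For reference, the classical von Kries transform that vK20 modifies is a diagonal scaling in cone (LMS) space; a minimal sketch with illustrative values:

```python
def von_kries_adapt(lms, lms_src_white, lms_dst_white):
    """Classical von Kries chromatic adaptation: scale each cone signal by
    the ratio of destination to source white-point cone responses."""
    return tuple(c * (d / s) for c, s, d in zip(lms, lms_src_white, lms_dst_white))

# Illustrative LMS values: adapt a stimulus from a warm white to a neutral white.
stim = (0.40, 0.35, 0.20)
warm_white = (1.00, 0.95, 0.70)
neutral_white = (1.00, 1.00, 1.00)
adapted = von_kries_adapt(stim, warm_white, neutral_white)
```

In this classical form the transform is symmetric: adapting from A to B and back from B to A returns the original signal. The vK20 modification accounts for the observed asymmetry by letting previous adapting conditions influence the effective white points.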

Journal ArticleDOI
04 Nov 2020
TL;DR: A robust metric, observer metamerism magnitude (OMM), is introduced that quantifies the OM of paired displays depending on the similarity in spectral bandwidth between them; the effect of changes in peak luminance on OM was found to be small.
Abstract: Observer metamerism (OM) is one of the potential issues in HDR displays because of the required wide color gamuts and high peak luminance levels. A simulation was performed using hypothetical displays to investigate how OM in HDR displays would vary with changes in color gamuts and peak luminance levels. In this work, a robust metric, observer metamerism magnitude (OMM), is introduced, which quantifies the OM of paired displays depending on the similarity in spectral bandwidth between them. Also, the effect of changes in peak luminance on OM was found to be small, increasing OMM by 7-8% when peak luminance doubles.

Journal ArticleDOI
04 Nov 2020
TL;DR: The results revealed the chromatic CSF, measured under the present experimental conditions (many lower spatial frequencies covering five colour centres), to be band-pass, whereas previous results indicated it was low-pass; however, this could be caused by experimental conditions such as fixed-size stimuli and constant luminance.
Abstract: The goal of this research is to generate a high-quality chromatic Contrast Sensitivity Function (CSF) over a wide range of spatial frequencies, from 0.06 to 3.84 cycles per degree (cpd), surrounding 5 CIE-proposed colour centres (white, red, yellow, green and blue) to study colour difference. At each centre, 6 colour directions were sampled at each of 7 spatial frequencies, from 0.06 to 3.84 cpd, corresponding to 2.3 to 144.4 cycles respectively. A threshold method based on a forced-choice staircase was adopted to investigate the just-noticeable (threshold) colour difference. The results revealed the chromatic CSF under the present experimental conditions (many lower spatial frequencies, covering five colour centres) to be band-pass, whereas previous results indicated it was low-pass; this discrepancy could be caused by experimental conditions such as fixed-size stimuli and constant luminance. New chromatic CSFs for the R-G and Y-B channels were also developed.

Introduction: The human visual system has different sensitivity to contrast patterns at different spatial frequencies. The function describing this dependence for simple sinusoidal patterns is called the contrast sensitivity function (CSF). The CSF for luminance patterns has been studied extensively and robust models are established. Barten [1] developed two models: one that is a physiologically inspired complex model, and the other that is relatively simple and empirically fitted to psychophysical data, as given in equation (1):

csf(f) = a f e^{-bf} (1 + c e^{bf})^{0.5} ,  (1)
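Barten's simple empirical CSF in equation (1) is straightforward to evaluate numerically. The parameter values a, b, c below are illustrative placeholders, not fitted values from Barten's work or from this paper:

```python
import numpy as np

# Sketch of Barten's simple empirical luminance CSF,
#   csf(f) = a * f * exp(-b*f) * sqrt(1 + c * exp(b*f)),
# with placeholder parameters chosen only to show the band-pass shape.

def barten_csf(f, a=75.0, b=0.2, c=0.8):
    """Contrast sensitivity at spatial frequency f (cycles per degree)."""
    f = np.asarray(f, dtype=float)
    return a * f * np.exp(-b * f) * np.sqrt(1.0 + c * np.exp(b * f))
```

Plotting this over 0-60 cpd shows the characteristic band-pass behaviour: sensitivity is zero at f = 0, rises to a peak at a few cpd, and falls off at high frequencies.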

Journal ArticleDOI
01 Nov 2020
TL;DR: In this article, the authors investigated spatio-chromatic contrast sensitivity in both younger and older color-normal observers and tested how the adapting light level affected the contrast sensitivity and whether there was a differential age-related change in sensitivity.
Abstract: We investigated spatio-chromatic contrast sensitivity in both younger and older color-normal observers. We tested how the adapting light level affected contrast sensitivity and whether there was a differential age-related change in sensitivity. Contrast sensitivity was measured along three directions in colour space (achromatic, red-green, yellowish-violet), at background luminance levels from 0.02 to 2000 cd/m2, and at different stimulus sizes, using a 4AFC method on a high dynamic range display. Twenty observers with a mean age of 33 years and 20 older observers with a mean age of 65 years participated in the study. Within each session, observers were fully adapted to the fixed background luminance. Our main findings are: (1) Contrast sensitivity increases with background luminance up to around 200 cd/m2, then either declines, in the case of achromatic contrast sensitivity, or remains constant, in the case of chromatic contrast sensitivity; (2) The sensitivity of the younger age group is higher than that of the older age group by 0.3 log units on average. Only for achromatic contrast sensitivity does the older age group show a relatively larger decline in sensitivity for medium to high spatial frequencies at high photopic light levels; (3) Peak frequency, peak sensitivity and cut-off frequency of the contrast sensitivity functions show decreasing trends with age, and the rate of this decrease depends on mean luminance. The data are being modeled to predict contrast sensitivity as a function of age, luminance level, spatial frequency, and stimulus size.

Journal ArticleDOI
01 Nov 2020
TL;DR: Weibull Tone Mapping (WTM) as mentioned in this paper is an automated tone mapping method that approximates the brightness distributions of the input and user-adjusted images by Weibull distributions and solves for the tone curve that matches one to the other.
Abstract: Imagery is a preferred tool for environmental surveys within marine environments, particularly in deeper waters, as it is non-destructive compared to traditional sampling methods. However, underwater illumination effects limit its use by causing extremely varied and inconsistent image quality. Therefore, it is often necessary to pre-process images to improve the visibility of image features and textures, and to standardize their appearance. Tone mapping is a simple and effective technique to improve contrast and manipulate the brightness distributions of images. Ideally, such tone mapping would be automated; however, we found that existing techniques are inferior when compared to custom manipulations by image annotators (biologists). Our own work begins with the observation that these user-defined tonal manipulations are quite variable, though on average are fairly smooth, gentle waving operations. To predict user-defined tone maps, we found it sufficed to approximate the brightness distributions of input and user-adjusted images by Weibull distributions and then solve for the tone curve which matched these distributions from input to output. Experiments demonstrate that our Weibull Tone Mapping (WTM) method is strongly preferred over traditional automated tone mappers and weakly preferred over the users' own tonal adjustments.
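The distribution-matching step described above can be sketched as follows, assuming the Weibull shape (k) and scale (lam) parameters have already been fitted to the input and target brightness histograms (the fitting itself is omitted). The tone curve is the standard histogram-matching composition T(x) = F_out^{-1}(F_in(x)) with Weibull CDFs:

```python
import numpy as np

# Sketch of the Weibull distribution-matching tone curve. Parameter
# fitting is assumed done elsewhere; k_* and lam_* are given.

def weibull_cdf(x, k, lam):
    """Weibull CDF: F(x) = 1 - exp(-(x/lam)^k), for x >= 0."""
    return 1.0 - np.exp(-(np.asarray(x, dtype=float) / lam) ** k)

def weibull_icdf(u, k, lam):
    """Weibull inverse CDF (quantile function)."""
    return lam * (-np.log(1.0 - np.asarray(u, dtype=float))) ** (1.0 / k)

def wtm_tone_curve(x, k_in, lam_in, k_out, lam_out):
    """Map brightness x so its Weibull distribution matches the target one."""
    u = weibull_cdf(x, k_in, lam_in)
    return weibull_icdf(np.clip(u, 0.0, 1.0 - 1e-12), k_out, lam_out)
```

Because both the CDF and its inverse are monotone increasing, the resulting tone curve is smooth and monotone, consistent with the gentle user adjustments the paper reports.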

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, the authors propose ParaSDN, an access control model to address the above problem using the concept of parameterized roles and permissions, which provides the benefits of enhancing access control granularity for SDN with support of role and permission parameters.
Abstract: Software Defined Networking (SDN) has become one of the most important network architectures for simplifying network management and enabling innovation through network programmability. Network applications submit network operations that directly and dynamically access critical network resources and manipulate the network behavior. Therefore, validating the operations submitted by SDN applications is critical for the security of SDNs. A feasible access control mechanism should allow system administrators to specify constraints that apply minimum privileges to applications at high granularity. However, the granularity of access provided by current access control systems for SDN applications is not sufficient to satisfy such requirements. In this paper, we propose ParaSDN, an access control model that addresses the above problem using the concept of parameterized roles and permissions. Our model enhances access control granularity for SDN with support for role and permission parameters. We implemented a proof-of-concept prototype in an SDN controller to demonstrate the applicability and feasibility of our proposed model in identifying and rejecting unauthorized access requests submitted by controller applications.
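The idea of parameterized roles and permissions can be illustrated with a minimal sketch. The role name, permission names, and parameters below are hypothetical examples, not ParaSDN's actual policy language:

```python
from dataclasses import dataclass, field

# Minimal sketch of parameterized role-based access control: a
# permission carries parameter bindings (e.g. which switch it applies
# to), so the same action can be granted on "s1" but denied on "s2".

@dataclass(frozen=True)
class Permission:
    action: str          # e.g. "read_flow_table" (hypothetical action name)
    params: tuple = ()   # sorted (key, value) bindings, e.g. (("switch", "s1"),)

@dataclass
class Role:
    name: str
    permissions: set = field(default_factory=set)

def is_authorized(role, action, **params):
    """Grant only if the role holds the permission with matching
    parameter bindings (minimum privilege at fine granularity)."""
    requested = Permission(action, tuple(sorted(params.items())))
    return requested in role.permissions

# A monitoring app allowed to read the flow table of switch s1 only.
monitor = Role("flow_monitor", {
    Permission("read_flow_table", (("switch", "s1"),)),
})
```

A plain (unparameterized) role could only grant or deny "read_flow_table" globally; the parameter bindings are what provide the per-resource granularity the paper argues for.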

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, the authors proposed a two-phase sustainability-aware resource allocation and management framework for data center life-cycle management that jointly optimizes the data center manufacturing phase and operational phase impact without impacting the performance and service quality for the jobs.
Abstract: In the big data era, cloud computing provides an effective usage model for providing computing services to handle diverse data-intensive workloads. Data center capacity planning and resource provisioning policies play a vital role in long-term life-cycle management of data centers. Effective design and management of data center infrastructures while ensuring good performance is critical to minimizing the carbon footprint of the data center. Traditional solutions have primarily focused on optimizing data center operational-phase impacts, including reducing energy cost during the resource management phase. In this paper, we propose a two-phase sustainability-aware resource allocation and management framework for data center life-cycle management that jointly optimizes the data center manufacturing-phase and operational-phase impact without affecting the performance and service quality of the jobs. Phase 1 of the proposed approach minimizes the data center building-phase carbon footprint through a novel manufacturing cost-aware server provisioning plan. In Phase 2, the approach minimizes the operational-phase carbon footprint using a server lifetime-aware resource allocation scheme and a manufacturing cost-aware replacement plan. The proposed techniques are evaluated through extensive experiments using realistic workloads generated in a data center. The evaluation results show that the proposed framework significantly reduces the carbon footprint in the data center without impacting the performance of the jobs in the workload.

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this article, the authors propose a model without any a priori selection of meta-paths, which utilizes locally-sampled (heterogeneous) context graphs centered at a target node in order to extract relevant representational information for that target node.
Abstract: Representation learning for heterogeneous graphs aims at learning meaningful node (or edge) representations to facilitate downstream tasks such as node classification, node clustering, and link prediction. While graph neural networks (GNNs) have recently proven to be effective in representation learning, one of the limitations is that most investigations focus on homogeneous graphs. Existing investigations on heterogeneous graphs often make direct use of meta-path type structures. Meta-path-based approaches often require a priori designation of meta-paths based on heuristic foreknowledge regarding the characteristics of heterogeneous graphs under investigation. In this paper, we propose a model without any a priori selection of meta-paths. We utilize locally-sampled (heterogeneous) context graphs “centered” at a target node in order to extract relevant representational information for that target node. To deal with the heterogeneity in the graph, given the different types of nodes, we use different linear transformations to map the features in different domains into a unified feature space. We use the classical Graph Convolution Network (GCN) model as a tool to aggregate node features and then aggregate the context graph feature vectors to produce the target node's feature representation. We evaluate our model on three real-world datasets. The results show that the proposed model has better performance when compared with four baseline models.
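The type-specific projection into a unified feature space can be sketched as follows. Node types, dimensions, and the randomly initialized weight matrices are illustrative placeholders (in practice the weights would be learned end-to-end with the GCN aggregation):

```python
import numpy as np

# Sketch: each node type has its own feature dimensionality, so a
# per-type linear map takes all node features into one shared space
# where a GCN-style aggregator can combine them. Types and dims are
# assumed for illustration.

rng = np.random.default_rng(0)
DIMS = {"author": 8, "paper": 16, "venue": 4}   # per-type input dims
UNIFIED = 12                                     # shared feature space

# One weight matrix per node type (would be trainable in practice).
W = {t: rng.normal(size=(d, UNIFIED)) for t, d in DIMS.items()}

def project(node_type, features):
    """Map a node's raw features into the unified feature space."""
    return np.asarray(features, dtype=float) @ W[node_type]
```

After this step, "author", "paper", and "venue" nodes all live in the same 12-dimensional space, so a single aggregator can operate over the sampled context graph regardless of node type.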

Proceedings ArticleDOI
Semih Sahin1, Ling Liu1, Wenqi Cao1, Qi Zhang1, Juhyun Bae1, Yanzhao Wu1 
01 Dec 2020
TL;DR: In this paper, a suite of memory abstraction and optimization techniques for distributed executors is presented, with the focus on showing the performance optimization opportunities for Spark executors, which are known to outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), a fundamental core of Spark.
Abstract: This paper presents a suite of memory abstraction and optimization techniques for distributed executors, with the focus on showing the performance optimization opportunities for Spark executors, which are known to outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), a fundamental core of Spark. This paper makes three original contributions. First, we show that applications on Spark experience large performance deterioration, when RDD is too large to fit in memory, causing unbalanced memory utilizations and premature spilling. Second, we develop a suite of techniques to guide the configuration of RDDs in Spark executors, aiming to optimize the performance of iterative ML workloads on Spark executors when their allocated memory is sufficient for RDD caching. Third, we design DAHI, a light-weight RDD optimizer. DAHI provides three enhancements to Spark: (i) using elastic executors, instead of fixed size JVM executors; (ii) supporting coarser grained tasks and large size RDDs by enabling partial RDD caching; and (iii) automatically leveraging remote memory for secondary RDD caching in the shortage of primary RDD caching on a local node. Extensive experiments on machine learning and graph processing benchmarks show that with DAHI, the performance of ML workloads and applications on Spark improves by up to 12.4x.