scispace - formally typeset

Showing papers on "Naturalness published in 2019"


Proceedings Article
24 May 2019
TL;DR: This article revisits these assumptions and provides theoretical results towards answering the above questions, making steps towards a deeper understanding of value-function approximation.
Abstract: Value-function approximation methods that operate in batch mode have foundational importance to reinforcement learning (RL). Finite sample guarantees for these methods often crucially rely on two types of assumptions: (1) mild distribution shift, and (2) representation conditions that are stronger than realizability. However, the necessity ("why do we need them?") and the naturalness ("when do they hold?") of such assumptions have largely eluded the literature. In this paper, we revisit these assumptions and provide theoretical results towards answering the above questions, and make steps towards a deeper understanding of value-function approximation.

220 citations
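The batch setting described in the abstract above can be made concrete with a small sketch. The following is an illustrative fitted Q-iteration on a toy chain MDP with one-hot (tabular) features; the environment, dataset size, and least-squares solver are assumptions for illustration, not from the paper.

```python
import numpy as np

# Illustrative sketch of batch (offline) value-function approximation via
# fitted Q-iteration. The toy chain MDP, one-hot features, dataset size,
# and least-squares regressor are assumptions for illustration only.

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

def step(s, a):
    # Deterministic chain: action 1 moves right, action 0 moves left;
    # reward 1 for landing on the rightmost state.
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == n_states - 1 else 0.0), s2

# Batch of transitions (s, a, r, s') collected under a uniform policy.
batch = []
for _ in range(500):
    s, a = int(rng.integers(n_states)), int(rng.integers(n_actions))
    r, s2 = step(s, a)
    batch.append((s, a, r, s2))

def phi(s, a):
    # One-hot features over (s, a): realizability holds by construction.
    x = np.zeros(n_states * n_actions)
    x[s * n_actions + a] = 1.0
    return x

w = np.zeros(n_states * n_actions)
for _ in range(100):  # repeatedly regress onto Bellman backup targets
    X = np.array([phi(s, a) for s, a, _, _ in batch])
    y = np.array([r + gamma * max(phi(s2, b) @ w for b in range(n_actions))
                  for _, _, r, s2 in batch])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(round(max(phi(0, b) @ w for b in range(n_actions)), 2))  # ≈ 7.29
```

With tabular features the regression reduces to exact value iteration; the failure modes the paper studies arise precisely when the representation is weaker than this and the batch distribution shifts away from the evaluated policy.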


Journal ArticleDOI
TL;DR: In this article, the authors examined whether perceptions of naturalness in architecture are linked to objective visual patterns, and investigated how natural patterns influence aesthetic evaluations of architectural scenes, finding that natural patterns explained over half of the variance in scene naturalness ratings, while aesthetic preference ratings were found to relate closely to natural patterns in architecture.

68 citations


Journal ArticleDOI
14 Apr 2019
TL;DR: For instance, this article found that perceived naturalness was positively related to the perceived plant and invertebrate biodiversity value, participants' aesthetic appreciation and the self-reported restorative effect of the planting, with women and more nature connected participants perceiving significantly higher levels of naturalness in the planting.
Abstract: 1. The multiple benefits of ‘nature’ for human health and well‐being have been documented at an increasing rate over the past 30 years. A growing body of research also demonstrates the positive well‐being benefits of nature‐connectedness. There is, however, a lack of evidence about how people's subjective nature experience relates to deliberately designed and managed urban green infrastructure (GI) with definable ‘objective’ characteristics such as vegetation type, structure and density. Our study addresses this gap. 2. Site users (n = 1411) were invited to walk through woodland, shrub and herbaceous planting at three distinctive levels of planting structure at 31 sites throughout England, whilst participating in a self‐guided questionnaire survey assessing reactions to aesthetics, perceived plant and invertebrate biodiversity, restorative effect, nature‐connectedness and socio‐demographic characteristics. 3. There was a significant positive relationship between perceived naturalness and planting structure. Perceived naturalness was also positively related to the perceived plant and invertebrate biodiversity value, participants’ aesthetic appreciation and the self‐reported restorative effect of the planting. A negative relationship was recorded between perceived naturalness and perceived tidiness and care. Our findings showed that participants perceived ‘naturalness’ as biodiverse, attractive and restorative, but not necessarily tidy. Perceived naturalness was also related to participants’ educational qualifications, gender and nature‐connectedness, with women and more nature‐connected participants perceiving significantly greater levels of naturalness in the planting. 4. These findings are highly significant for policymakers and built environment professionals throughout the world aiming to design, manage and fund urban GI to achieve positive human health and biodiversity outcomes.
This applies particularly under austerity approaches to managing urban green spaces where local authorities have experienced cuts in funding and must prioritise and justify GI maintenance practices and regimes.

55 citations


Journal ArticleDOI
TL;DR: This study investigated the efficacy of different messages designed to address consumers' concerns about clean meat naturalness, and indicated that arguing that conventional meat is unnatural resulted in a significant increase in some measures of acceptance compared to other messages.

54 citations


Journal ArticleDOI
TL;DR: ERP modulations to close-up full-coloured and grey-scaled faces as well as cutout fearful and neutral facial expressions were examined, which suggests that face naturalness and emotion are decoded in parallel at these early stages.
Abstract: In neuroscientific studies, the naturalness of face presentation differs; a third of published studies makes use of close-up full coloured faces, a third uses close-up grey-scaled faces and another third employs cutout grey-scaled faces. Whether and how these methodological choices affect emotion-sensitive components of the event-related brain potentials (ERPs) is yet unclear. Therefore, this pre-registered study examined ERP modulations to close-up full-coloured and grey-scaled faces as well as cutout fearful and neutral facial expressions, while attention was directed to no-face oddballs. Results revealed no interaction of face naturalness and emotion for any ERP component, but showed, however, large main effects for both factors. Specifically, fearful faces and decreasing face naturalness elicited substantially enlarged N170 and early posterior negativity amplitudes, and lower face naturalness also resulted in a larger P1. This pattern reversed for the LPP, showing linear increases in LPP amplitudes with increasing naturalness. We observed no interaction of emotion with face naturalness, which suggests that face naturalness and emotion are decoded in parallel at these early stages. Researchers interested in strong modulations of early components should make use of cutout grey-scaled faces, while those interested in a pronounced late positivity should use close-up coloured faces.

37 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the minimal Type-III seesaw model to explain the origin of the nonzero neutrino masses and mixing and show that the naturalness arguments and the bounds from lepton flavor violating decay (\(\mu \rightarrow e \gamma \)) provide very stringent bounds on the model along with the constraints on the stability of the electroweak vacuum up to high energy scale.
Abstract: We study the minimal Type-III seesaw model to explain the origin of the non-zero neutrino masses and mixing. We show that the naturalness arguments and the bounds from lepton flavor violating decay (\(\mu \rightarrow e \gamma \)) provide very stringent bounds on the model along with the constraints on the stability of the electroweak vacuum up to high energy scale. We perform a detailed analysis of the model parameter space including all the constraints for both normal and inverted hierarchies of the light neutrino masses. We find that most of the regions that are allowed by naturalness and lepton flavor violating decay fall into the metastable region.

28 citations


Journal ArticleDOI
TL;DR: The FNI has the potential to become a valuable tool in the process of reformulating existing products, developing new ones, and understanding, tracking, and communicating food naturalness attributes in the marketplace and may provide an objective basis for the use of the “natural” label on food products.
Abstract: Background Consumers are increasingly demanding transparency in food labeling as they want more and better information about what they are eating and where their food comes from. This seems to be particularly the case for food naturalness. Several food indexes or metrics have been developed in the last decades to objectively measure various aspects of food, yet a comprehensive index that quantifies the naturalness of foods is still missing. Scope and approach In the absence of clear rules to define and measure food naturalness, this article describes the development of the Food Naturalness Index (FNI), which aims to accurately measure the degree of food naturalness. The FNI simultaneously integrates and builds on insights from consumer research, legal and technical perspectives. A preliminary assessment of the index with consumers across a wide variety of products was conducted. Key Findings and Conclusions: The FNI proposed herein is comprised of four component measures, namely farming practices, free from additives, free from unexpected ingredients, and degree of processing, which includes 10 relevant food naturalness attributes that can be consistently evaluated from information on the product label. The FNI scores were highly correlated with consumers’ perceptions of food naturalness. The FNI has the potential to become a valuable tool in the process of reformulating existing products, developing new ones, and understanding, tracking, and communicating food naturalness attributes in the marketplace. Furthermore, the FNI may provide an objective basis for the use of the “natural” label on food products, which can ultimately lead to better-informed choices.

26 citations
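The composition of the index can be sketched as follows. The four component names come from the abstract; the [0, 1] scale, the attribute scores, and the equal-weight average are assumptions for illustration, not the published FNI scoring rules.

```python
# Hedged sketch of a composite naturalness score in the spirit of the
# FNI. Component names come from the abstract; the [0, 1] scale and the
# equal-weight average are illustrative assumptions, not the published
# scoring rules.

def fni_score(components: dict) -> float:
    """Average the four component scores (each assumed in [0, 1])."""
    required = {"farming_practices", "free_from_additives",
                "free_from_unexpected_ingredients", "degree_of_processing"}
    if set(components) != required:
        raise ValueError(f"expected components {sorted(required)}")
    return sum(components.values()) / len(components)

# Hypothetical product: organic farming, no additives, one unexpected
# ingredient, lightly processed.
score = fni_score({
    "farming_practices": 0.9,
    "free_from_additives": 1.0,
    "free_from_unexpected_ingredients": 0.5,
    "degree_of_processing": 0.8,
})
print(round(score, 3))  # → 0.8
```

In the published index each component is itself built from the 10 label-readable attributes; the point of the sketch is only the aggregation step.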


Journal ArticleDOI
03 Sep 2019
TL;DR: In this article, the notion of stringy naturalness and how it lifts the Higgs boson mass to the 125 GeV range while also lifting sparticle masses beyond LHC search limits was studied.
Abstract: This paper studies the notion of ``Stringy naturalness'' and how it lifts the Higgs boson mass to the 125 GeV range whilst also lifting sparticle masses beyond LHC search limits. As a consequence, SUSY may be revealed at high luminosity LHC via the presence of light higgsinos of mass ~100-300 GeV while a higher energy LHC upgrade may be required to access gluinos and top squarks.

22 citations


Journal ArticleDOI
01 Sep 2019-Appetite
TL;DR: It is concluded that the scales can be used interchangeably, and it is recommended to use the shortest, and therefore most efficient, measure of the importance of naturalness.

22 citations


Posted Content
TL;DR: This work extends a previous GAN-based speech enhancement system to deal with mixtures of four types of aggressive distortions, and proposes the addition of an adversarial acoustic regression loss that promotes a richer feature extraction at the discriminator.
Abstract: The speech enhancement task usually consists of removing additive noise or reverberation that partially mask spoken utterances, affecting their intelligibility. However, little attention is drawn to other, perhaps more aggressive signal distortions like clipping, chunk elimination, or frequency-band removal. Such distortions can have a large impact not only on intelligibility, but also on naturalness or even speaker identity, and require careful signal reconstruction. In this work, we give full consideration to this generalized speech enhancement task, and show it can be tackled with a time-domain generative adversarial network (GAN). In particular, we extend a previous GAN-based speech enhancement system to deal with mixtures of four types of aggressive distortions. Firstly, we propose the addition of an adversarial acoustic regression loss that promotes a richer feature extraction at the discriminator. Secondly, we also make use of a two-step adversarial training schedule, acting as a warm-up and fine-tune sequence. Both objective and subjective evaluations show that these two additions bring improved speech reconstructions that better match the original speaker identity and naturalness.

22 citations


Proceedings ArticleDOI
20 Sep 2019
TL;DR: The authors use variational autoencoders (VAEs) which explicitly place the most "average" data close to the mean of the Gaussian prior and propose that by moving towards the tails of the prior distribution, the model will transition towards generating more idiosyncratic, varied renditions.
Abstract: Unlike human speakers, typical text-to-speech (TTS) systems are unable to produce multiple distinct renditions of a given sentence. This has previously been addressed by adding explicit external control. In contrast, generative models are able to capture a distribution over multiple renditions and thus produce varied renditions using sampling. Typical neural TTS models learn the average of the data because they minimise mean squared error. In the context of prosody, taking the average produces flatter, more boring speech: an "average prosody". A generative model that can synthesise multiple prosodies will, by design, not model average prosody. We use variational autoencoders (VAEs) which explicitly place the most "average" data close to the mean of the Gaussian prior. We propose that by moving towards the tails of the prior distribution, the model will transition towards generating more idiosyncratic, varied renditions. Focusing here on intonation, we investigate the trade-off between naturalness and intonation variation and find that typical acoustic models can either be natural, or varied, but not both. However, sampling from the tails of the VAE prior produces much more varied intonation than the traditional approaches, whilst maintaining the same level of naturalness.
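The sampling idea can be sketched by scaling draws from the standard-Gaussian prior: a scaled latent lands farther from the prior mean, i.e., in the tails. The latent dimensionality is an illustrative assumption, and no decoder is modeled here; the paper's actual VAE operates on prosody representations.

```python
import numpy as np

# Sketch of tail sampling from a VAE's standard-Gaussian prior. A latent
# scaled by tail_scale > 1 lands farther from the prior mean, which the
# paper associates with more idiosyncratic, varied renditions. The
# latent size is an illustrative assumption; no decoder is modeled here.

rng = np.random.default_rng(1)
latent_dim = 16

def sample_latent(tail_scale: float) -> np.ndarray:
    """tail_scale = 1.0 samples the prior; > 1.0 moves toward its tails."""
    return tail_scale * rng.standard_normal(latent_dim)

# Average distance from the prior mean, per scale: tail samples sit
# farther out, so a decoder conditioned on them sees rarer latents.
radii = {s: float(np.mean([np.linalg.norm(sample_latent(s))
                           for _ in range(1000)]))
         for s in (1.0, 2.0)}
print(radii[2.0] > radii[1.0])  # → True
```

Since the "average" data sits near the prior mean, decoding latents drawn from farther out trades typicality (naturalness) for variation, which is exactly the trade-off the paper measures.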

Journal ArticleDOI
04 Aug 2019-Foods
TL;DR: Results showed that consumers primarily look at the whole product when making decisions about naturalness, but factors like ingredient familiarity and processing also likely influence their judgements of product naturalness.
Abstract: Natural foods are important to consumers, yet frustrating to producers due to the lack of a formal definition of “natural”. Previous work has studied how consumers define naturalness and how they rate the naturalness of various products, but there is a gap in knowledge relating to how color and flavor additives impact perceptions. The objective of this study was to understand how colorants and flavorants on ingredient statements affect perceptions of naturalness. An online survey was launched in the United States, United Kingdom, and Australia to determine how consumers perceive products with ingredient statements containing different combinations of artificial and natural colors and flavors when shown with and without the product identity. Results showed that consumers look at the whole product primarily to make decisions about naturalness, but also consider other factors. Products derived from plants and products with natural colors and flavors were perceived to be the most natural. Artificial flavors may be more acceptable than artificial colors due to negative health perceptions and labeling rules associated with colors. Additionally, factors like ingredient familiarity and processing likely influence consumers when making decisions about product naturalness. Males, Millennials, and educated participants have higher naturalness scores than other participants in their respective demographics.

Journal ArticleDOI
TL;DR: In this article, the authors argue that no single parametrization constitutes the physically correct, fundamental parameterization of the theory, and the delicate cancellation between bare Higgs mass and quantum corrections appears as an eliminable artifact of the arbitrary, unphysical reference scale with respect to which the physical amplitudes of a theory are parametrized.
Abstract: The Higgs naturalness principle served as the basis for the so far failed prediction that signatures of physics beyond the Standard Model (SM) would be discovered at the LHC. One influential formulation of the principle, which prohibits fine tuning of bare Standard Model (SM) parameters, rests on the assumption that a particular set of values for these parameters constitute the “fundamental parameters” of the theory, and serve to mathematically define the theory. On the other hand, an old argument by Wetterich suggests that fine tuning of bare parameters merely reflects an arbitrary, inconvenient choice of expansion parameters and that the choice of parameters in an EFT is therefore arbitrary. We argue that these two interpretations of Higgs fine tuning reflect distinct ways of formulating and interpreting effective field theories (EFTs) within the Wilsonian framework: the first takes an EFT to be defined by a single set of physical, fundamental bare parameters, while the second takes a Wilsonian EFT to be defined instead by a whole Wilsonian renormalization group (RG) trajectory, associated with a one-parameter class of physically equivalent parametrizations. From this latter perspective, no single parametrization constitutes the physically correct, fundamental parametrization of the theory, and the delicate cancellation between bare Higgs mass and quantum corrections appears as an eliminable artifact of the arbitrary, unphysical reference scale with respect to which the physical amplitudes of the theory are parametrized. While the notion of fundamental parameters is well motivated in the context of condensed matter field theory, we explain why it may be superfluous in the context of high energy physics.

Posted Content
01 Aug 2019-viXra
TL;DR: In this paper, a model based on three 'assumptions' is proposed: one geometric, one electromagnetic, and one taking the electron mass to define the scale of space.
Abstract: We offer a model based upon three 'assumptions'. The first is geometric: that the vacuum wavefunction is comprised of Euclid's fundamental geometric objects of space (point, line, plane, and volume elements), components of the geometric representation of Clifford algebra. The second is electromagnetic: that physical manifestation follows from introducing the dimensionless coupling constant α. The third takes the electron mass to define the scale of space. Such a model is arguably maximally 'natural'. Wavefunction interactions are modeled by the geometric product of Clifford algebra. What emerges is more naturalness. We offer an emergent definition.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the apparent failure of naturalness in cosmology and in the Standard Model and argue that any such naturalness failure threatens to undermine the entire structure of our understanding of inter-theoretic reduction, and so risks a much larger crisis in physics than is sometimes suggested.
Abstract: I develop an account of naturalness (that is, approximately: lack of extreme fine-tuning) in physics which demonstrates that naturalness assumptions are not restricted to narrow cases in high-energy physics but are a ubiquitous part of how inter-level relations are derived in physics. After exploring how and to what extent we might justify such assumptions on methodological grounds or through appeal to speculative future physics, I consider the apparent failure of naturalness in cosmology and in the Standard Model. I argue that any such naturalness failure threatens to undermine the entire structure of our understanding of inter-theoretic reduction, and so risks a much larger crisis in physics than is sometimes suggested; I briefly review some currently popular strategies that might avoid that crisis.

Proceedings ArticleDOI
01 Feb 2019
TL;DR: It is found that code refactoring does not necessarily increase the naturalness of the refactored code, and that the impact on code naturalness strongly depends on the type of refactoring operations.
Abstract: Recent studies have demonstrated that software is natural, that is, its source code is highly repetitive and predictable like human languages. Also, previous studies suggested the existence of a relationship between code quality and its naturalness, presenting empirical evidence showing that buggy code is “less natural” than non-buggy code. We conjecture that this quality-naturalness relationship could be exploited to support refactoring activities (e.g., to locate source code areas in need of refactoring). We perform a first step in this direction by analyzing whether refactoring can improve the naturalness of code. We use state-of-the-art tools to mine a large dataset of refactoring operations performed in open source systems. Then, we investigate the impact of different types of refactoring operations on the naturalness of the impacted code. We found that (i) code refactoring does not necessarily increase the naturalness of the refactored code; and (ii) the impact on the code naturalness strongly depends on the type of refactoring operations.
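In this line of work, "naturalness" is typically operationalized as cross-entropy of a token sequence under a language model trained on a code corpus: lower cross-entropy means more natural. A minimal sketch, assuming a bigram model with add-one smoothing and a toy corpus (the actual studies use larger n-gram or neural models and real repositories):

```python
import math
from collections import Counter

# Sketch of code naturalness as language-model cross-entropy. The tiny
# corpus, bigram model, and add-one smoothing are illustrative
# assumptions; lower cross-entropy = more "natural" code.

def bigram_cross_entropy(train_tokens, test_tokens):
    unigrams = Counter(train_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    vocab = len(set(train_tokens)) + 1  # +1 budget for unseen tokens
    h = 0.0
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
        h -= math.log2(p)
    return h / (len(test_tokens) - 1)  # bits per bigram

corpus = "for i in range ( n ) : total += i".split() * 20
natural = "for j in range ( n ) : total += j".split()
odd = "j += for n in : range total ( )".split()

# A conventional snippet scores lower (more natural) than a scrambled one.
print(bigram_cross_entropy(corpus, natural)
      < bigram_cross_entropy(corpus, odd))  # → True
```

The paper's question then becomes whether applying a refactoring lowers this score for the touched code, which the mined data shows is not guaranteed.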

Posted Content
TL;DR: In this paper, the old idea of cosmological vacuum relaxation of the Higgs mass, which does not require any new physics in the vicinity of LHC energies, acquires a new meaning; the author explains how this concept of naturalness differs from the standard one by 't Hooft.
Abstract: In the post-LHC era, the old idea of cosmological vacuum relaxation of the Higgs mass that does not require any new physics in the vicinity of LHC energies acquires a new meaning. I discuss how this concept of naturalness differs from the standard one by 't Hooft. Here the observed value of the Higgs mass corresponds to a vacuum of infinite degeneracy and infinite entropy. Therefore, it represents an attractor point of cosmic inflationary evolution. This information is unavailable to a low-energy observer living in one of these vacua. By not seeing any stabilizing physics at the LHC, such an observer is puzzled and creates an artificial problem of naturalness which in reality does not exist. We explain why this solution is fully compatible with the concept of Wilsonian decoupling.

Proceedings ArticleDOI
15 Sep 2019
TL;DR: In this paper, a time-domain generative adversarial network (GAN) is used to deal with mixtures of four types of aggressive distortions, such as clipping, chunk elimination, and frequency-band removal.
Abstract: The speech enhancement task usually consists of removing additive noise or reverberation that partially mask spoken utterances, affecting their intelligibility. However, little attention is drawn to other, perhaps more aggressive signal distortions like clipping, chunk elimination, or frequency-band removal. Such distortions can have a large impact not only on intelligibility, but also on naturalness or even speaker identity, and require careful signal reconstruction. In this work, we give full consideration to this generalized speech enhancement task, and show it can be tackled with a time-domain generative adversarial network (GAN). In particular, we extend a previous GAN-based speech enhancement system to deal with mixtures of four types of aggressive distortions. Firstly, we propose the addition of an adversarial acoustic regression loss that promotes a richer feature extraction at the discriminator. Secondly, we also make use of a two-step adversarial training schedule, acting as a warm-up and fine-tune sequence. Both objective and subjective evaluations show that these two additions bring improved speech reconstructions that better match the original speaker identity and naturalness.

Proceedings ArticleDOI
01 Dec 2019
TL;DR: This work introduces a novel self-attention based encoder with learnable Gaussian bias in Mandarin TTS that has the ability to generate stable and natural speech with minimum language-dependent front-end modules and evaluates different systems with and without complex prosody information.
Abstract: Compared to conventional speech synthesis, end-to-end speech synthesis has achieved much better naturalness with a more simplified system-building pipeline. An end-to-end framework can generate natural speech directly from characters for English. But for other languages like Chinese, recent studies have indicated that extra engineering features are still needed for model robustness and naturalness, e.g., word boundaries and prosody boundaries, which makes the front-end pipeline as complicated as in the traditional approach. To maintain the naturalness of generated speech and discard language-specific expertise as much as possible, in Mandarin TTS, we introduce a novel self-attention based encoder with learnable Gaussian bias in Tacotron. We evaluate different systems with and without complex prosody information, and results show that the proposed approach has the ability to generate stable and natural speech with minimum language-dependent front-end modules.

Journal ArticleDOI
TL;DR: In this paper, a model-independent framework was proposed to classify and study neutrino mass models and their phenomenology, which allows one to study processes that do not violate lepton number.
Abstract: We propose a model-independent framework to classify and study neutrino mass models and their phenomenology. The idea is to introduce one particle beyond the Standard Model which couples to leptons and carries lepton number, together with an operator which violates lepton number by two units and contains this particle. This allows us to study processes which do not violate lepton number, while still working with an effective field theory. The contribution to neutrino masses translates to a robust upper bound on the mass of the new particle. We compare it to the stronger but less robust upper bound from Higgs naturalness and discuss several lower bounds. Our framework allows us to classify neutrino mass models into just 20 categories, further reduced to 14 once nucleon decay limits are taken into account, and possibly to 9 if Higgs naturalness considerations and direct searches are also considered.

Journal ArticleDOI
11 Apr 2019-Forests
TL;DR: In this article, a conceptual model for the assessment of the impact of using wood on the quality of ecosystems is proposed, based on the condition of the forest and the proportion of different practices to characterize precisely the forest management strategy.
Abstract: Research Highlights: To inform eco-designers in green building conception, we propose a conceptual model for the assessment of the impact of using wood on the quality of ecosystems. Background and Objectives: The proposed model allows the assessment of the quality of ecosystems at the landscape level based on the condition of the forest and the proportion of different practices to characterize precisely the forest management strategy. The evaluation provides a numerical index, which corresponds to a suitable format to inform decision-making support tools, such as life cycle analysis. Materials and Methods: Based on the concept of naturalness, the methodology considers five naturalness characteristics (landscape context, forest composition, structure, dead wood, and regeneration process) and relies on forest inventory maps and data. An area within the boreal black spruce-feathermoss ecological domain of Quebec (Canada) was used as a case study for the development of the methodology, designed to be easily exportable. Results: In 2012, the test area had a near-natural class (naturalness index NI = 0.717). Simulation of different management strategies over 70 years shows that, considering 17.9% of strict protected areas, the naturalness index would have lost one to two classes of naturalness (out of five classes), depending on the strategy applied for the regeneration (0.206 ≤ ΔNI ≤ 0.413). Without the preservation of the protected areas, the management strategies would have further reduced the naturalness (0.274 ≤ ΔNI ≤ 0.492). Apart from exotic species plantation, the most sensitive variables are the percentage of area in irregular, old, and closed forests at time zero and the percentage of area in closed forests, late successional species groups, and modified wetlands after 70 years. 
Conclusions: Despite the necessity of further model and parameter validation, the use of the index makes it possible to combine the effects of different forestry management strategies and practices into one alteration gradient.
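The aggregation behind such an index can be sketched as follows. The five characteristic names come from the abstract, while the equal weights, [0, 1] scale, and evenly spaced class boundaries are assumptions rather than the published model's calibration.

```python
# Hedged sketch of aggregating the five naturalness characteristics into
# one index plus a class label. Equal weights, a [0, 1] scale, and five
# evenly spaced classes are illustrative assumptions, not the published
# model's calibration.

CHARACTERISTICS = ("landscape_context", "composition", "structure",
                   "dead_wood", "regeneration")
CLASSES = ("artificial", "altered", "semi-natural", "near-natural", "natural")

def naturalness_index(scores: dict) -> tuple:
    """Return (NI, class label) from per-characteristic scores in [0, 1]."""
    if set(scores) != set(CHARACTERISTICS):
        raise ValueError("one score per characteristic is required")
    ni = sum(scores.values()) / len(scores)  # assumed equal weights
    cls = CLASSES[min(int(ni * len(CLASSES)), len(CLASSES) - 1)]
    return ni, cls

# Hypothetical stand scores chosen to reproduce the reported NI = 0.717.
ni, cls = naturalness_index({
    "landscape_context": 0.75, "composition": 0.70, "structure": 0.72,
    "dead_wood": 0.65, "regeneration": 0.765,
})
print(round(ni, 3), cls)  # → 0.717 near-natural
```

A management scenario then maps to changed characteristic scores, and the index difference (ΔNI) expresses the alteration gradient the authors report.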

Journal ArticleDOI
TL;DR: In this paper, a low-energy Lagrangian is constructed in which the information of compositeness and Higgs nonlinearity is encoded in the form factors, the two-point functions in the top sector.
Abstract: Composite Higgs and neutral-naturalness models are popular scenarios in which the Higgs boson is a pseudo Nambu-Goldstone boson (PNGB), and the naturalness problem is addressed by composite top partners. Since the standard model effective field theory (SMEFT) with dimension-six operators cannot fully retain the information of Higgs nonlinearity due to its PNGB nature, we systematically construct a low-energy Lagrangian in which the information of compositeness and Higgs nonlinearity is encoded in the form factors, the two-point functions in the top sector. We classify naturalness conditions in various scenarios, and first present these form factors in composite neutral-naturalness models. After extracting the Higgs effective couplings from these form factors and performing a global fit, we find the value of the Higgs-top coupling could still be larger than the standard model one if the top quark is embedded in higher-dimensional representations. We also find the impact of Higgs nonlinearity is enhanced by the large mass splitting between composite states. In this case, the pattern of the correlation between the $t\bar{t}h$ and $t\bar{t}hh$ couplings is quite different for the linear and nonlinear Higgs descriptions.

Journal ArticleDOI
TL;DR: In this paper, the authors distinguish two notions of naturalness employed in beyond the standard model (BSM) physics and argue that recognizing this distinction has methodological consequences, and they argue that these two notions are historically and conceptually related but are motivated by distinct theoretical considerations and admit of distinct kinds of solution.
Abstract: My aim in this paper is twofold: (i) to distinguish two notions of naturalness employed in beyond the standard model (BSM) physics and (ii) to argue that recognizing this distinction has methodological consequences. One notion of naturalness is an “autonomy of scales” requirement: it prohibits sensitive dependence of an effective field theory’s low-energy observables on precise specification of the theory’s description of cutoff-scale physics. I will argue that considerations from the general structure of effective field theory provide justification for the role this notion of naturalness has played in BSM model construction. A second, distinct notion construes naturalness as a statistical principle requiring that the values of the parameters in an effective field theory be “likely” given some appropriately chosen measure on some appropriately circumscribed space of models. I argue that these two notions are historically and conceptually related but are motivated by distinct theoretical considerations and admit of distinct kinds of solution.

Proceedings ArticleDOI
20 Sep 2019
TL;DR: In this paper, problem-agnostic speech embeddings are used in a multi-speaker acoustic model for text-to-speech (TTS) based on SampleRNN.
Abstract: Text-to-speech (TTS) acoustic models map linguistic features into an acoustic representation out of which an audible waveform is generated. The latest and most natural TTS systems build a direct mapping between the linguistic and waveform domains, like SampleRNN. This way, possible signal naturalness losses are avoided as intermediate acoustic representations are discarded. Another important dimension of study apart from naturalness is their adaptability to generate voice from new speakers that were unseen during training. In this paper we first propose the use of problem-agnostic speech embeddings in a multi-speaker acoustic model for TTS based on SampleRNN. This way we feed the acoustic model with speaker acoustically dependent representations that enrich the waveform generation more than discrete embeddings unrelated to these factors. Our first results suggest that the proposed embeddings lead to better quality voices than those obtained with discrete embeddings. Furthermore, as we can use any speech segment as an encoded representation during inference, the model is capable of generalizing to new speaker identities without retraining the network. We finally show that, with a small increase of speech duration in the embedding extractor, we dramatically reduce the spectral distortion to close the gap towards the target identities.

Journal ArticleDOI
TL;DR: A recoloring algorithm that combines contrast enhancement and naturalness preservation in a unified optimization model is proposed; a subjective evaluation revealed that the proposed method obtained the best scores in preserving both naturalness and information for individuals with severe red–green CVD.
Abstract: Color vision deficiency (CVD) is caused by anomalies in the cone cells of the human retina. It affects approximately 200 million individuals throughout the world. Although previous studies have proposed compensation methods, contrast and naturalness preservation have not been adequately and simultaneously addressed in the state of the art. This paper focuses on compensation for red–green dichromats and proposes a recoloring algorithm that combines contrast enhancement and naturalness preservation in a unified optimization model. In this implementation, representative color extraction and edit propagation methods are introduced to maintain global and local information in the recolored image. The quantitative evaluation results showed that the proposed method is competitive with state-of-the-art methods. A subjective experiment was also conducted, and its results revealed that the proposed method obtained the best scores in preserving both naturalness and information for individuals with severe red–green CVD.
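A minimal sketch of what such a unified objective could look like (the dichromacy-simulation matrix, weighting scheme, and function names are illustrative assumptions, not the paper's actual model): one term penalizes departures from the original palette (naturalness preservation), and another penalizes loss of pairwise contrast as perceived by a dichromat:

```python
import numpy as np

def simulate_deuteranopia(rgb: np.ndarray) -> np.ndarray:
    # Crude linear projection collapsing the red-green axis (illustrative only;
    # real CVD simulation works in an LMS-like color space).
    M = np.array([[0.625, 0.375, 0.0],
                  [0.700, 0.300, 0.0],
                  [0.000, 0.300, 0.700]])
    return rgb @ M.T

def recolor_objective(palette: np.ndarray, recolored: np.ndarray,
                      weight: float = 0.5) -> float:
    """Unified objective over a (N, 3) palette of representative colors."""
    # Naturalness term: keep recolored colors close to the originals.
    naturalness = float(np.sum((recolored - palette) ** 2))
    # Contrast term: preserve original pairwise distances as seen by a dichromat.
    orig_d = np.linalg.norm(palette[:, None] - palette[None, :], axis=-1)
    sim = simulate_deuteranopia(recolored)
    sim_d = np.linalg.norm(sim[:, None] - sim[None, :], axis=-1)
    contrast = float(np.sum((sim_d - orig_d) ** 2))
    return weight * contrast + (1.0 - weight) * naturalness
```

Minimizing this objective over `recolored` (e.g., by gradient descent) trades off the two goals via `weight`; the paper additionally propagates the optimized representative colors to the full image via edit propagation.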

Journal ArticleDOI
TL;DR: Naturalness is one of the most studied concepts in spatial planning; it is considered an important criterion, especially for nature conservation, as discussed in this paper.
Abstract: Naturalness is one of the most studied concepts in spatial planning; it is considered an important criterion, especially for nature conservation. The objective of assessing naturalness is t...

Journal ArticleDOI
TL;DR: In this paper, the authors consider urban inhabitants' appreciation of the naturalness of the landscape, on the premise that people living in urban areas benefit from green space and may wish to care more about its protection.
Abstract: This paper considers the issue of urban inhabitants’ appreciation of the naturalness of the landscape provided that people living in urban areas can benefit from the green space and would like to care more about its protection. This study examines: (1) Warsaw inhabitants’ preferences with regard to places to spend free time outdoors; (2) public perception of the advantages and disadvantages of the semi-natural Vistula riverfront; and (3) people’s connectedness to nature and willingness to donate funds to modernize the riverfront (N = 630). We applied a questionnaire method based on the computer-assisted web interview. The findings suggest that Warsaw residents appreciate the naturalness of the landscape at the Vistula riverfront, would not like to take direct responsibility for its condition, and would rather the municipality invest in public spaces. Therefore, the municipality of Warsaw should work to enhance inhabitants’ attachment to the place and build a sense of common responsibility for the protection of the riverfront’s natural environment.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the epistemic attitudes of particle physicists on the discovery of the Higgs boson at the Large Hadron Collider (LHC), based on questionnaires and interviews made shortly before and shortly after the discovery in 2012.
Abstract: Our paper discusses the epistemic attitudes of particle physicists on the discovery of the Higgs boson at the Large Hadron Collider (LHC). It is based on questionnaires and interviews made shortly before and shortly after the discovery in 2012. We show, to begin with, that the discovery of a Standard Model (SM) Higgs boson was less expected than is sometimes assumed. Once the new particle was shown to have properties consistent with SM expectations – albeit with significant experimental uncertainties – there was a broad agreement that ‘a’ Higgs boson had been found. Physicists adopted a two-pronged strategy. On the one hand, they treated the particle as a SM Higgs boson and tried to establish its properties with higher precision; on the other hand, they searched for any hints of physics beyond the SM. This motivates our first philosophical thesis: the Higgs discovery, being of fundamental importance and establishing a new kind of particle, represented a crucial experiment if one interprets this notion in an appropriate sense. By embedding the LHC into the tradition of previous precision experiments and the experimental strategies thus established, underdetermination and confirmational holism are kept at bay. Second, our case study suggests that criteria of theory (or model) preference should be understood as epistemic and pragmatic values that have to be weighed in factual research practice. The Higgs discovery led to a shift from pragmatic to epistemic values as regards the mechanisms of electroweak symmetry breaking. Complex criteria, such as naturalness, combine different epistemic and pragmatic values, but are coherently applied by the community.

Book ChapterDOI
09 Sep 2019
TL;DR: In this paper, the authors evaluate the usefulness of audio transformations for voice-only question answering and introduce a crowdsourcing setup evaluating the quality of their proposed modifications along multiple dimensions corresponding to the informativeness, naturalness and ability of users to identify key parts of the answer.
Abstract: Many popular form factors of digital assistants—such as Amazon Echo or Google Home—enable users to converse with speech-based systems. The lack of screens presents unique challenges. To satisfy users’ information needs, the presentation of answers has to be optimized for voice-only interactions. We evaluate the usefulness of audio transformations (i.e., prosodic modifications) for voice-only question answering. We introduce a crowdsourcing setup evaluating the quality of our proposed modifications along multiple dimensions corresponding to the informativeness, naturalness, and ability of users to identify key parts of the answer. We offer a set of prosodic modifications that highlight potentially important parts of the answer using various acoustic cues. Our experiments show that different modifications lead to better comprehension at the expense of slightly degraded naturalness of the audio.
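As a concrete illustration of this kind of prosodic modification (a hypothetical sketch, not the authors' actual pipeline), key parts of an answer can be marked up in standard SSML so that a speech synthesizer pauses before, and emphasizes, the important span:

```python
def highlight_answer(answer: str, key_span: str) -> str:
    """Wrap `key_span` in SSML emphasis, preceded by a short pause.

    Uses the standard SSML <break> and <emphasis> elements; the 300 ms
    pause duration and 'strong' emphasis level are illustrative choices.
    """
    if key_span not in answer:
        return f"<speak>{answer}</speak>"
    marked = answer.replace(
        key_span,
        f'<break time="300ms"/><emphasis level="strong">{key_span}</emphasis>',
        1,  # only highlight the first occurrence
    )
    return f"<speak>{marked}</speak>"
```

For example, `highlight_answer("The tower is 330 metres tall.", "330 metres")` yields SSML in which the height is acoustically set off from the surrounding sentence, matching the paper's finding that such cues aid comprehension at a small cost in naturalness.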

Journal ArticleDOI
TL;DR: In this paper, a low-energy Lagrangian is constructed in which the information about compositeness and Higgs nonlinearity is encoded in the form factors, i.e., the two-point functions in the top sector.
Abstract: Composite Higgs and neutral-naturalness models are popular scenarios in which the Higgs boson is a pseudo Nambu-Goldstone boson (PNGB) and the naturalness problem is addressed by composite top partners. Since the standard model effective field theory (SMEFT) with dimension-six operators cannot fully retain the information of Higgs nonlinearity due to its PNGB nature, we systematically construct a low-energy Lagrangian in which the information about compositeness and Higgs nonlinearity is encoded in the form factors, the two-point functions in the top sector. We classify naturalness conditions in various scenarios and present these form factors in composite neutral-naturalness models for the first time. After extracting the Higgs effective couplings from these form factors and performing a global fit, we find that the value of the Higgs-top coupling could still be larger than the standard model one if the top quark is embedded in higher-dimensional representations. We also find that the impact of Higgs nonlinearity is enhanced by the large mass splitting between composite states. In this case, the pattern of the correlation between the $t\bar{t}h$ and $t\bar{t}hh$ couplings is quite different for the linear and nonlinear Higgs descriptions.
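A schematic version of the form-factor Lagrangian this abstract describes, in the generic momentum-space notation common in the composite-Higgs literature (the specific form factors $\Pi$ are model-dependent; the expressions below are a sketch, not the paper's results):

```latex
% All information about compositeness and Higgs nonlinearity is carried
% by the form factors \Pi in the top-sector two-point functions:
\mathcal{L}_{\rm eff} =
    \bar{t}_L\, \slashed{p}\, \Pi_{t_L}(p^2)\, t_L
  + \bar{t}_R\, \slashed{p}\, \Pi_{t_R}(p^2)\, t_R
  + \left[\, \bar{t}_L\, \Pi_{t_L t_R}(p^2, h)\, t_R + {\rm h.c.} \right]
% Higgs nonlinearity enters through trigonometric functions of h/f,
% e.g. \Pi_{t_L t_R}(p^2, h) \propto \sin(h/f) in minimal cosets,
% rather than through the truncated polynomial in h of the
% dimension-six SMEFT expansion.
```

Expanding the trigonometric Higgs dependence of $\Pi_{t_L t_R}$ around the electroweak vacuum is what generates the correlated $t\bar{t}h$ and $t\bar{t}hh$ couplings discussed in the abstract, and why the linear and nonlinear descriptions diverge.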