Topic

Naturalness

About: Naturalness is a research topic. Over its lifetime, 1,305 publications have been published within this topic, receiving 31,737 citations.


Papers
Posted Content
TL;DR: In this paper, the implications of the first LHC results for models motivated by the hierarchy problem are discussed, and bounds, global fits, and implications for naturalness are presented for these models.
Abstract: We discuss the implications of the first LHC results for models motivated by the hierarchy problem: large extra dimensions and supersymmetry. We present bounds, global fits, and implications for naturalness.
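For reference, naturalness in fits of this kind is commonly quantified with the Barbieri–Giudice sensitivity measure; the definition below is the standard one from the literature, not a formula quoted from this paper.

```latex
% Barbieri-Giudice fine-tuning measure: the largest logarithmic
% sensitivity of m_Z^2 to the model's input parameters p_i.
% Smaller \Delta_{BG} means a more natural parameter point.
\Delta_{BG} = \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right|
```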

18 citations

Proceedings ArticleDOI
25 Oct 2020
TL;DR: The authors investigated to what extent multilingual multi-speaker modeling can be an alternative to monolingual multi-speaker modeling, and explored how data from foreign languages may best be combined with low-resource language data.
Abstract: Recent advances in neural TTS have led to models that can produce high-quality synthetic speech. However, these models typically require large amounts of training data, which can make it costly to produce a new voice with the desired quality. Although multi-speaker modeling can reduce the data requirements necessary for a new voice, this approach is usually not viable for many low-resource languages for which abundant multi-speaker data is not available. In this paper, we therefore investigated to what extent multilingual multi-speaker modeling can be an alternative to monolingual multi-speaker modeling, and explored how data from foreign languages may best be combined with low-resource language data. We found that multilingual modeling can increase the naturalness of low-resource language speech, showed that multilingual models can produce speech with a naturalness comparable to monolingual multi-speaker models, and saw that the target language naturalness was affected by the strategy used to add foreign language data.
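As a rough illustration of one way foreign-language data might be combined with low-resource target-language data in such multilingual training, here is a minimal sampling sketch; the ratio-based mixing strategy and all language, speaker, and file names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: oversample a low-resource target language when
# mixing it with foreign-language corpora for multilingual
# multi-speaker TTS training.
import random

def build_training_mix(target_utts, foreign_utts, target_ratio=0.5, size=10000):
    """Sample a training list in which the target language makes up
    roughly target_ratio of the examples despite having far less data."""
    mix = []
    for _ in range(size):
        pool = target_utts if random.random() < target_ratio else foreign_utts
        mix.append(random.choice(pool))
    return mix

# Each utterance carries language and speaker IDs that the acoustic
# model conditions on (names below are invented for the example).
target = [{"lang": "af", "speaker": "af_01", "wav": "af_01_0001.wav"}]
foreign = [{"lang": "nl", "speaker": f"nl_{i:02d}", "wav": f"nl_{i:02d}_0001.wav"}
           for i in range(1, 11)]
print(len(build_training_mix(target, foreign)))  # -> 10000
```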

18 citations

Proceedings ArticleDOI
11 Oct 2018
TL;DR: This work investigates the use of natural language modelling techniques in mutation testing (a testing technique that uses artificial faults) to identify how well artificial faults simulate real ones and, ultimately, how natural artificial faults can be.
Abstract: Background: Code is repetitive and predictable in a way that is similar to natural language. This means that code is "natural", and this "naturalness" can be captured by natural language modelling techniques. Such models promise to capture the program semantics and identify source code parts that "smell", i.e., that are strange, badly written, and generally error-prone (likely to be defective). Aims: We investigate the use of natural language modelling techniques in mutation testing (a testing technique that uses artificial faults). We thus seek to identify how well artificial faults simulate real ones and, ultimately, to understand how natural artificial faults can be. Our intuition is that natural mutants, i.e., mutants that are predictable (follow the implicit coding norms of developers), are semantically useful and generally valuable (to testers). We also expect mutants located on unnatural code locations (which are generally linked with error-proneness) to be of higher value than those located on natural code locations. Method: Based on this idea, we propose mutant selection strategies that rank mutants according to a) their naturalness (naturalness of the mutated code), b) the naturalness of their locations (naturalness of the original program statements), and c) their impact on the naturalness of the code that they apply to (naturalness differences between original and mutated statements). We empirically evaluate these strategies on a benchmark set of 5 open-source projects, involving more than 100k mutants and 230 real faults. Based on the fault set, we estimate the utility (i.e., the capability to reveal faults) of mutants selected on the basis of their naturalness, and compare it against the utility of randomly selected mutants. Results: Our analysis shows that there is no link between naturalness and the fault-revelation utility of mutants. We also demonstrate that naturalness-based mutant selection performs similarly to (in fact slightly worse than) random mutant selection. Conclusions: Our findings are negative, but we consider them interesting as they refute a strong intuition, showing that fault revelation is independent of the mutants' naturalness.
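To make the ranking idea concrete, here is a toy sketch of scoring mutants by naturalness; it uses a Laplace-smoothed bigram model as a stand-in for the paper's actual language model, and every identifier is illustrative.

```python
# Toy sketch: rank mutants by the cross-entropy a language model assigns
# to the mutated statement (lower cross-entropy = more "natural").
from collections import Counter
import math

def train_bigram(corpus_tokens):
    """Count unigrams and bigrams over tokenized source lines."""
    unigrams, bigrams = Counter(), Counter()
    for toks in corpus_tokens:
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return unigrams, bigrams

def cross_entropy(toks, unigrams, bigrams, vocab_size):
    """Laplace-smoothed bigram cross-entropy of a token sequence."""
    h = 0.0
    for a, b in zip(toks, toks[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        h -= math.log2(p)
    return h / max(1, len(toks) - 1)

corpus = [["if", "(", "x", ">", "0", ")", "return", "x", ";"]]
uni, bi = train_bigram(corpus)
mutants = [["if", "(", "x", ">=", "0", ")", "return", "x", ";"],
           ["if", "(", "x", ">", "0", ")", "return", "-", "x", ";"]]
ranked = sorted(mutants, key=lambda m: cross_entropy(m, uni, bi, len(uni)))
print(ranked[0])  # the most "natural" mutant comes first
```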

18 citations

Journal ArticleDOI
TL;DR: A unified CQ model, developed with a multiple nonlinear regression equation combined with the Illuminating Engineering Society of North America color rendition method, accords satisfactorily with the subjective evaluation while being applicable to a wide range of CCTs.
Abstract: Because existing color quality (CQ) metrics for light sources do not correlate well with subjective evaluation, a psychophysical experiment using the categorical judgment method was carried out in an immersive environment equipped with a multichannel LED light source to assess three perception-related CQ attributes of light sources: naturalness, colorfulness, and preference. The experiment collected subjective responses to these attributes for up to 41 metameric spectra at each of four test correlated color temperatures (CCTs) ranging from 2800 to 6500 K, which covers the usual white-light range for general lighting. The results indicate that preference exhibits relatively high correlation with naturalness and colorfulness, while naturalness is only weakly related to colorfulness. In addition, 20 typical CQ metrics were examined for their validity in characterizing the subjective data, confirming their limited performance. The underlying relationship between these metrics and the subjective data was also analyzed by multidimensional scaling, revealing that almost all metrics correspond to one of the attributes of naturalness, colorfulness, and preference, and that the saturation level is a critical factor affecting these attributes. Based on these results, a unified CQ model was developed with a multiple nonlinear regression equation combining the Illuminating Engineering Society of North America color rendition method. The model accords satisfactorily with the subjective evaluation, while being applicable to a wide range of CCTs.
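As a minimal sketch of what fitting such a unified model could look like, the snippet below regresses synthetic ratings on IES TM-30-style fidelity (Rf) and gamut (Rg) indices with linear and quadratic terms; the predictors, functional form, and data are assumptions for illustration, not the published model.

```python
# Illustrative sketch: multiple nonlinear regression of mean subjective
# ratings on color-rendition indices (synthetic data, invented form).
import numpy as np

rng = np.random.default_rng(0)
rf = rng.uniform(70, 100, 41)   # fidelity index per test spectrum
rg = rng.uniform(90, 115, 41)   # gamut index per test spectrum
# Synthetic "ratings" with a quadratic saturation effect plus noise.
rating = (0.05 * rf + 0.02 * rg - 0.001 * (rg - 105) ** 2
          + rng.normal(0, 0.1, 41))

# Design matrix with intercept, linear, and quadratic terms,
# solved by ordinary least squares.
X = np.column_stack([np.ones_like(rf), rf, rg, rg ** 2])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print("fitted coefficients:", np.round(coef, 4))
```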

18 citations

Journal ArticleDOI
TL;DR: In this article, naturalness considerations are used to derive upper bounds on the masses of the heavy Higgs bosons in supersymmetric theories, motivating heavy Higgs searches as a key part of the LHC's search for naturalness.
Abstract: We explore naturalness constraints on the masses of the heavy Higgs bosons H0, H±, and A0 in supersymmetric theories. We show that, in any extension of the MSSM which accommodates the 125 GeV Higgs at the tree level, one can derive an upper bound on the SUSY Higgs masses from naturalness considerations. As is well known for the MSSM, these bounds become weak at large tan β. However, we show that measurements of b → sγ together with naturalness arguments lead to an upper bound on tan β, strengthening the naturalness case for heavy Higgs states near the TeV scale. The precise bound depends somewhat on the SUSY mediation scale: allowing a factor of 10 tuning in the stop sector, the measured rate of b → sγ implies tan β ≲ 30 for running down from 10 TeV but tan β ≲ 4 for mediation at or above 100 TeV, placing mA near the TeV scale for natural EWSB. Because the signatures of heavy Higgs bosons at colliders are less susceptible to being "hidden" than standard superpartner signatures, there is a strong motivation to make heavy Higgs searches a key part of the LHC's search for naturalness. In an appendix we comment on how the Goldstone boson equivalence theorem links the rates for H → hh and H → ZZ signatures.
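For context, the weakening of these bounds at large tan β can be read off the standard tree-level MSSM electroweak-symmetry-breaking condition (a textbook relation, not reproduced from the paper): the sensitivity of m_Z to m_{H_d}^2, which sets the heavy Higgs masses, is suppressed by tan²β.

```latex
% Tree-level MSSM minimization condition: the m_{H_d}^2 term is
% suppressed by \tan^2\beta, so heavy-Higgs naturalness bounds
% weaken as \tan\beta grows.
\frac{m_Z^2}{2} = \frac{m_{H_d}^2 - m_{H_u}^2 \tan^2\beta}{\tan^2\beta - 1} - \mu^2
```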

18 citations


Network Information
Related Topics (5)
Statistical model: 19.9K papers, 904.1K citations, 69% related
Sentence: 41.2K papers, 929.6K citations, 69% related
Vocabulary: 44.6K papers, 941.5K citations, 67% related
Detector: 146.5K papers, 1.3M citations, 67% related
Cluster analysis: 146.5K papers, 2.9M citations, 66% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    282
2022    610
2021    82
2020    63
2019    83
2018    52