scispace - formally typeset
Topic

Naturalness

About: Naturalness is a research topic. Over its lifetime, 1,305 publications have been published within this topic, receiving 31,737 citations.


Papers
Proceedings ArticleDOI
21 Jan 2022
TL;DR: This paper proposes ALERT (Naturalness Aware Attack), a black-box attack that adversarially transforms inputs to make victim models produce wrong outputs, and investigates the value of the generated adversarial examples for hardening victim models through an adversarial fine-tuning procedure.
Abstract: Pre-trained models of code have achieved success in many important software engineering tasks. However, these powerful models are vulnerable to adversarial attacks that slightly perturb model inputs to make a victim model produce wrong outputs. Current works mainly attack models of code with examples that preserve the operational semantics of programs but ignore a fundamental requirement for adversarial example generation: perturbations should appear natural to human judges, which we refer to as the naturalness requirement. In this paper, we propose ALERT (Naturalness Aware Attack), a black-box attack that adversarially transforms inputs to make victim models produce wrong outputs. Unlike prior works, this paper considers the natural semantics of generated examples while also preserving the operational semantics of the original inputs. Our user study demonstrates that human developers consistently judge adversarial examples generated by ALERT to be more natural than those generated by the state-of-the-art work by Zhang et al., which ignores the naturalness requirement. On attacking CodeBERT, our approach achieves attack success rates of 53.62%, 27.79%, and 35.78% across three downstream tasks: vulnerability prediction, clone detection, and code authorship attribution. On GraphCodeBERT, our approach achieves average success rates of 76.95%, 7.96%, and 61.47% on the same three tasks. These results outperform the baseline by 14.07% and 18.56% on the two pre-trained models on average. Finally, we investigated the value of the generated adversarial examples for hardening victim models through an adversarial fine-tuning procedure and demonstrated that the accuracy of CodeBERT and GraphCodeBERT against ALERT-generated adversarial examples increased by 87.59% and 92.32%, respectively.

39 citations
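The black-box, semantics-preserving attack the abstract describes can be sketched as a greedy identifier-substitution loop. The sketch below is our own minimal illustration, not the authors' implementation: `victim_predict` and `natural_candidates` are hypothetical stand-ins for the victim model's label query and for ALERT's naturalness-ranked replacement generator, and the rename step is a naive string replacement rather than a scope-aware refactoring.

```python
# Minimal sketch (our illustration, not the paper's code) of a black-box
# identifier-substitution attack: try naturalness-ranked replacement names
# for each identifier, keeping program semantics intact, and stop as soon
# as the victim model's predicted label flips.

def greedy_attack(code, identifiers, natural_candidates, victim_predict, true_label):
    """Return an adversarial variant of `code`, or None if no rename flips the label."""
    current = code
    for ident in identifiers:
        for cand in natural_candidates(ident):
            # Naive whole-string rename for illustration only; a real attack
            # would rename only the binding occurrences of the identifier.
            mutated = current.replace(ident, cand)
            if victim_predict(mutated) != true_label:
                return mutated  # prediction flipped: attack succeeded
    return None
```

A toy victim that keys its prediction on the substring "tmp" is enough to exercise the loop: renaming `tmp` to `temp` preserves the program's behaviour while changing the model's output.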

Journal ArticleDOI
TL;DR: Running couplings can be understood as arising from the spontaneous breaking of an exact scale invariance in appropriate effective quantum field theories with no dilatation anomaly; any ordinary quantum field theory can be embedded into a theory with spontaneously broken exact scale invariance in such a way that the ordinary running is recovered in the appropriate limit.
Abstract: Running couplings can be understood as arising from the spontaneous breaking of an exact scale invariance in appropriate effective theories with no dilatation anomaly. Any ordinary quantum field theory, even if it has massive fields, can be embedded into a theory with spontaneously broken exact scale invariance in such a way that the ordinary running is recovered in the appropriate limit, as long as the potential has a flat direction. These scale-invariant theories, however, do not necessarily solve the cosmological constant or naturalness problems, which become manifest in the need to fine-tune dimensionless parameters.

39 citations
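The embedding the abstract describes can be illustrated with a standard dilaton toy model (our sketch, not the paper's own Lagrangian): every dimensionful parameter is promoted to a coupling with a scalar field χ, so the classical action is exactly scale invariant.

```latex
\mathcal{L} \;=\; \tfrac{1}{2}(\partial\phi)^2 \;+\; \tfrac{1}{2}(\partial\chi)^2
\;-\; \tfrac{\xi^2}{2}\,\chi^2\phi^2 \;-\; \tfrac{\lambda}{4}\,\phi^4
```

If the potential has a flat direction, χ acquires a vacuum expectation value ⟨χ⟩ that spontaneously breaks the symmetry, and the ordinary massive theory with m = ξ⟨χ⟩ is recovered in the appropriate limit, as the abstract states.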


01 Jan 1995
TL;DR: In this article, the relation between perceptual image quality and naturalness was investigated by varying the colorfulness and hue of color images of natural scenes; the results showed that both quality and naturalness deteriorate as soon as hues start to deviate from those in the original image.
Abstract: The relation between perceptual image quality and naturalness was investigated by varying the colorfulness and hue of color images of natural scenes. These variations were created by digitizing the images, determining their color-point distributions in the CIELUV color space, and then multiplying either the chroma value or the hue angle of each pixel by a constant. During the chroma (hue-angle) transformation, the lightness and hue angle (chroma value) of each pixel were kept constant. Ten subjects rated quality and naturalness on numerical scales. The results show that both quality and naturalness deteriorate as soon as hues start to deviate from those in the original image. Chroma variation affected the impression of quality and naturalness to a lesser extent than hue variation did. In general, a linear relation was found between image quality and naturalness. For chroma variation, however, a small but systematic deviation could be observed; this deviation reflects the subjects' preference for more colorful but, at the same time, somewhat unnatural images.

38 citations
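The per-pixel transformations the abstract describes follow directly from the polar form of the CIELUV (u*, v*) plane: chroma is the radius and hue is the angle, so scaling one while holding the other fixed is a polar-coordinate operation. The sketch below is our own illustration under that reading, not the study's code; the function names are ours.

```python
import math

def scale_chroma(L, u, v, k):
    """Scale a pixel's CIELUV chroma by factor k, keeping lightness and hue angle fixed."""
    chroma = math.hypot(u, v)          # C*_uv = sqrt(u*^2 + v*^2)
    hue = math.atan2(v, u)             # h_uv, the hue angle
    new_chroma = k * chroma
    return L, new_chroma * math.cos(hue), new_chroma * math.sin(hue)

def rotate_hue(L, u, v, dtheta):
    """Shift a pixel's hue angle by dtheta radians, keeping lightness and chroma fixed."""
    chroma = math.hypot(u, v)
    hue = math.atan2(v, u) + dtheta
    return L, chroma * math.cos(hue), chroma * math.sin(hue)
```

Applying `scale_chroma` with k > 1 makes the image more colorful without moving any hue, which is exactly the manipulation the study found subjects sometimes preferred despite its lower naturalness.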


Network Information
Related Topics (5)
- Statistical model: 19.9K papers, 904.1K citations, 69% related
- Sentence: 41.2K papers, 929.6K citations, 69% related
- Vocabulary: 44.6K papers, 941.5K citations, 67% related
- Detector: 146.5K papers, 1.3M citations, 67% related
- Cluster analysis: 146.5K papers, 2.9M citations, 66% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    282
2022    610
2021    82
2020    63
2019    83
2018    52