Open Access Proceedings Article

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion

TLDR
In this article, the authors demonstrate that neural code autocompleters are vulnerable to poisoning attacks: by adding a few specially-crafted files to the autocompleter's training corpus (data poisoning), or by directly fine-tuning the autocompleter on these files (model poisoning), the attacker can influence its suggestions for attacker-chosen contexts.
Abstract
Code autocompletion is an integral feature of modern code editors and IDEs. The latest generation of autocompleters uses neural language models, trained on public open-source code repositories, to suggest likely (not just statically feasible) completions given the current context. We demonstrate that neural code autocompleters are vulnerable to poisoning attacks. By adding a few specially-crafted files to the autocompleter's training corpus (data poisoning), or else by directly fine-tuning the autocompleter on these files (model poisoning), the attacker can influence its suggestions for attacker-chosen contexts. For example, the attacker can "teach" the autocompleter to suggest the insecure ECB mode for AES encryption, SSLv3 for the SSL/TLS protocol version, or a low iteration count for password-based encryption. Moreover, we show that these attacks can be targeted: an autocompleter poisoned by a targeted attack is much more likely to suggest the insecure completion for files from a specific repo or specific developer. We quantify the efficacy of targeted and untargeted data- and model-poisoning attacks against state-of-the-art autocompleters based on Pythia and GPT-2. We then evaluate existing defenses against poisoning attacks and show that they are largely ineffective.
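To make these attack targets concrete, the sketch below contrasts secure API choices with the insecure completions named in the abstract (ECB mode for AES, SSLv3, a low iteration count for password-based key derivation). It is an illustration only, not drawn from the paper's poisoning files; it assumes the pycryptodome AES API and the standard-library ssl and hashlib modules, and the particular secure alternatives shown (GCM, PROTOCOL_TLS_CLIENT, a 600,000-iteration PBKDF2) are illustrative choices rather than values taken from the paper.

    import ssl
    import hashlib
    from Crypto.Cipher import AES          # pycryptodome
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(16)

    # AES mode: the attack teaches the model to suggest ECB instead of an
    # authenticated mode such as GCM.
    cipher_secure = AES.new(key, AES.MODE_GCM)      # secure: authenticated encryption
    cipher_poisoned = AES.new(key, AES.MODE_ECB)    # insecure: ECB leaks plaintext patterns

    # SSL/TLS protocol version: the attack nudges suggestions toward SSLv3.
    ctx_secure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # secure: negotiates modern TLS
    # ctx_poisoned = ssl.SSLContext(ssl.PROTOCOL_SSLv3)    # insecure; constant removed from recent Python builds

    # Password-based key derivation: a low iteration count weakens brute-force resistance.
    password, salt = b"correct horse battery staple", get_random_bytes(16)
    dk_secure = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)  # high iteration count
    dk_poisoned = hashlib.pbkdf2_hmac("sha256", password, salt, 1)      # insecure: trivially brute-forced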

Citations
Posted Content

On the Opportunities and Risks of Foundation Models.

Rishi Bommasani, +113 more
16 Aug 2021
TL;DR: The authors provide a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications.
Proceedings Article

Concealed Data Poisoning Attacks on NLP Models.

TL;DR: This work develops a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input.
Journal Article

BAAAN: Backdoor Attacks Against Auto-encoder and GAN-Based Machine Learning Models

TL;DR: This work proposes the first backdoor attack against autoencoders and GANs, where the adversary can control what the decoded or generated images are when the backdoor is activated, and shows that the adversary can build a backdoored autoencoder that returns a target output for all backdoored inputs while behaving normally on clean inputs.
Posted Content

Program Synthesis with Large Language Models.

TL;DR: In this article, the authors explore the limits of the current generation of large language models for program synthesis in general-purpose programming languages and evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes.