Computer scientists from Stanford University have found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who fly solo. From a report: In a paper titled "Do Users Write More Insecure Code with AI Assistants?", Stanford boffins Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh answer that question in the affirmative. Worse still, they found that AI help tends to delude developers about the quality of their output. "We found that participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection," the authors state in their paper.
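For context, SQL injection, one of the vulnerability classes the paper calls out, typically arises when untrusted input is spliced directly into a query string. The sketch below is not taken from the study; the table and function names are hypothetical, and it simply contrasts the vulnerable pattern with a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is interpolated into the SQL text,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver passes the value separately from
    # the SQL text, so it is never interpreted as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```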

Participants who were given access to an artificial intelligence assistant were also more likely to believe they had written secure code than those who weren't. The finding echoes earlier work from NYU researchers showing that AI-based programming suggestions are not always secure. In an August 2021 paper titled "Asleep at the Keyboard?", which assessed the security of Copilot's code contributions, they found that roughly forty percent of the programs written with Copilot's help contained potentially exploitable vulnerabilities.

That study, however, is limited in scope: it considers only 25 vulnerabilities and just three programming languages. The Stanford authors cite "Security Implications of Large Language Model Code Assistants: A User Study" as the only comparable user study they are aware of. Their own work differs, they note, because it focuses on OpenAI's codex-davinci-002 model rather than the less powerful codex-cushman-001 model.