Participants who were given access to an artificial intelligence assistant were more likely to believe that they had written secure code than those who weren't. NYU researchers had already shown that AI-based programming suggestions are often insecure: their paper "Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions," published in August 2021, assessed the security of Copilot-generated code and found that roughly forty percent of the programs produced with its help contained potentially exploitable vulnerabilities.
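The NYU assessment classifies weaknesses by CWE category. As a purely illustrative sketch (not an example taken from the study), the Python snippet below shows the kind of potentially exploitable flaw such an assessment flags: a SQL injection (CWE-89), next to the parameterized fix.

```python
import sqlite3

def get_user(db: sqlite3.Connection, username: str) -> list:
    # VULNERABLE: untrusted input is interpolated straight into the SQL
    # string, so an attacker can inject extra clauses (CWE-89, SQL injection).
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

def get_user_safe(db: sqlite3.Connection, username: str) -> list:
    # SAFE: a parameterized query lets the driver escape the input.
    return db.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```

Passing a crafted username such as `' OR '1'='1` to the first function turns the query into one that matches every row, which is exactly the sort of weakness a security assessment of generated code would count as exploitable.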
The NYU study is limited in scope, however, as it considers only 25 kinds of vulnerabilities and only three programming languages. The only comparable user study the authors are aware of is "Security Implications of Large Language Model Code Assistants: A User Study." Their own work differs in that it uses OpenAI's codex-davinci-002 model rather than the less powerful codex-cushman-001 model used in that earlier study.
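For context on how these models were accessed: both Codex variants were served through OpenAI's completions API, which has since been deprecated. Below is a minimal sketch of querying the stronger model that way, assuming the legacy pre-1.0 openai Python SDK; the prompt and key are placeholders, not material from either study.

```python
import openai  # legacy (<1.0) SDK; the Codex completions endpoint is deprecated

openai.api_key = "sk-..."  # placeholder API key

response = openai.Completion.create(
    model="code-davinci-002",  # API identifier for the codex-davinci-002 model
    prompt=(
        "# Python 3\n"
        "# A function that checks a password against a stored hash\n"
        "def check_password(password, stored_hash):\n"
    ),
    max_tokens=128,
    temperature=0.0,  # greedy decoding for more reproducible completions
)
print(response.choices[0].text)
```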