Study finds AI assistants help developers produce code that’s more likely to be insecure

Computer scientists from Stanford University have found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who fly solo.

In a paper titled “Do Users Write More Insecure Code with AI Assistants?”, Stanford boffins Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh answer that question in the affirmative.

Worse still, they found that AI help tends to delude developers about the quality of their output.

“We found that participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the authors state in their paper. “Surprisingly, we also found that participants provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant.”
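To illustrate the SQL injection class of flaw the researchers counted, here is a minimal sketch (not drawn from the paper; the table, column names, and input are hypothetical) contrasting the vulnerable string-built query an assistant might suggest with the parameterized form:

```python
import sqlite3

# Hypothetical in-memory database for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable pattern: concatenating input into the query string
# lets the input rewrite the query's logic and leak every row.
vulnerable = "SELECT secret FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(vulnerable).fetchall()  # matches all rows

# Safe pattern: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()  # matches nothing

print(len(leaked), len(safe))
```

The vulnerable query collapses to `WHERE name = '' OR '1'='1'`, which is always true; the parameterized version looks for a literal user named `' OR '1'='1` and finds none.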
