Dec 29 – Machine learning algorithms are currently all the rage when it comes to generating “original” content after being trained on pre-existing datasets. However, code-generating AI could present issues for software security.
AI-assisted code and software security
AI systems such as GitHub Copilot aim to simplify programmers’ work by generating entire blocks of “new” code from natural-language prompts and pre-existing context. At the same time, code-generating algorithms can introduce code-security challenges.
Research findings: AI-assisted solutions
According to the researchers, when programmers had access to the Codex AI, the code produced by the end of the project was more likely to be incorrect or insecure than the “hand-written” solutions developed by the control group.
Further, programmers using the AI-assisted solutions were more likely than their control-group peers to believe that their insecure code was sufficiently secure.
Code-generating systems for developers
Neil Perry, a PhD candidate at Stanford and the study’s lead co-author, stated that code-generating systems are not yet good enough to replace human developers. While code-generating systems can be useful for low-risk tasks, Perry advises developers to always double-check generated code.
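To illustrate the kind of review Perry recommends, below is a hedged, hypothetical sketch (not taken from the study) of a subtly insecure pattern an AI assistant might suggest for a database lookup, alongside the safer equivalent a careful reviewer should insist on. The function names and schema are invented for illustration.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Pattern an assistant might plausibly suggest: building the SQL
    # string directly from user input -- vulnerable to SQL injection.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Parameterized query: the database driver binds the input safely,
    # so a malicious username cannot alter the query's structure.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A payload such as `' OR '1'='1` makes the insecure version return every row in the table, while the parameterized version treats it as an ordinary (non-matching) string. Both versions look superficially correct, which is exactly why generated code warrants a second look.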
Organizations, AI and cyber security
Owners of AI algorithms can also fine-tune the algorithms themselves to improve coding suggestions. Further, organizations may wish to develop their own systems in order to produce code suggestions that are in line with their own security best practices.
For many, code-generating technology is an “exciting” development. However, there is still a lot of work to be done to perfect these systems.