EXECUTIVE SUMMARY:

Newly emerging tools enable developers to write software in partnership with artificial intelligence (AI). However, experiments show that AI-written code can be just as imperfect as human-generated code.

In June, the software development platform GitHub rolled out a beta version of Copilot, an AI-powered code completion tool for professional developers. Once a developer begins typing a command, a database query, or an API request, the program predicts the coder’s intent and automatically writes the rest.
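The interaction looks roughly like this: the developer writes a descriptive comment and a function signature, and the tool proposes the body. The sketch below is purely illustrative; the endpoint, names, and completion are hypothetical, not actual Copilot output.

```python
import requests

# The developer types only the comment and the signature below;
# a Copilot-style assistant proposes the function body.

# Fetch a user record from a REST API and return the parsed JSON.
def get_user(user_id: int) -> dict:
    response = requests.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()  # surface HTTP errors instead of returning bad data
    return response.json()
```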

Data scientist Alex Naka, who has used this software, reports that he now spends less time looking up API documentation or hunting for code examples. Instead, he says his role has shifted from writing code to reviewing and triaging the code the tool proposes.

Nonetheless, despite this novel use of AI and the hope that it would eliminate bugs, errors have appeared in AI-generated code. Users of AI coding programs report that subtle mistakes in the tools’ suggestions are easy to miss when approving proposed code.

Will AI-powered tools transform software development as we know it?

Theoretically, such tools could enable humans to shift from mundane tasks to higher-level projects. In a similar vein, when vendors began to integrate AI into cyber security tools, cyber security professionals were able to switch from tedious tasks to more demanding, higher-priority activities.

Research indicates that code generated by Copilot contains security flaws roughly 40 percent of the time. That said, the underlying model was trained to produce text that plausibly continues a given prompt; it was never trained specifically to produce secure, high-quality code. As more AI-based code development programs emerge and improve, using them could potentially yield business benefits.
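To make that risk concrete, consider one widely cited class of security flaw: SQL injection. The sketch below is illustrative only and is not drawn from the Copilot research itself; the table and column names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Flawed pattern an AI assistant might emit: user input is pasted
    # directly into the SQL string, so name = "x' OR '1'='1" dumps the table.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # Safer pattern: a parameterized query keeps the input as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```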

AI-generated code and security

When it comes to AI-generated computer code, security represents a prime concern. Despite the high rate of security flaws found in AI-generated code, researchers suggest that these flaws may be concentrated in certain subsets of code; other types of code could prove less flawed and more secure. Further research is necessary.

For developers using the Copilot program referred to above, GitHub recommends pairing it with CodeQL, its static analysis engine. This can help developers ensure that generated code is safe.
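In practice, that pairing can be as simple as enabling CodeQL code scanning in a repository’s continuous integration. A minimal GitHub Actions workflow along these lines might look like the following, assuming a Python project; file path and action versions may differ in a given setup.

```yaml
# .github/workflows/codeql.yml -- scan pushed code, including AI-assisted
# code, with GitHub's CodeQL static analysis.
name: "CodeQL"
on: [push, pull_request]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload scan results
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: python    # adjust to the repository's language(s)
      - uses: github/codeql-action/analyze@v3
```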

Hammond Pearce, a postdoctoral researcher at NYU, states that code-generating programs can yield problematic code because the underlying algorithms lack any real “comprehension” of the code’s purpose. To account for this, developers need a strong background in the type of code being produced so that they can identify bugs quickly and accurately.

Another security concern stems from the fact that a bad actor could, in theory, seed GitHub with deliberately vulnerable code projects, artificially inflate their popularity by purchasing GitHub stars on the black market, and then wait for that code to be pulled into professional projects.

Automating developers out of work?

The creation of AI-based coding programs leads some developers to wonder if their jobs will become obsolete. At this stage in the game, experts point out that AI-based programs still require considerable management on the part of developers, as they must review and sometimes modify the program’s suggestions. 

In summary

Artificial intelligence programs can write code, but their capacities are limited and the code is error-prone. At present, AI-based coding programs do not pose a threat to professional coders. Rather, AI-based code development tools could potentially boost human productivity as they improve and evolve.