
The enormous power and potential danger of AI-generated code

In June 2021, GitHub announced Copilot, a kind of autocomplete for computer code based on OpenAI’s text generation technology. It provided a first glimpse of the impressive potential of generative AI to automate valuable work. Two years later, Copilot is one of the more mature examples of how technology can accomplish tasks that previously had to be done manually.

GitHub released a report this week, based on data from nearly a million programmers who pay to use Copilot, showing how transformative generative AI coding has become. On average, users accepted the AI assistant's suggestions about 30 percent of the time, suggesting that the system is remarkably good at predicting useful code.

GitHub

The striking graph above shows how users tend to accept more of Copilot's suggestions the longer they use the tool. The report also concludes that AI-enhanced programmers become more productive over time, based on an earlier Copilot study that reported a link between the number of suggestions accepted and a programmer's productivity. GitHub's new report says the greatest productivity gains were seen among less experienced developers.

At first glance, that's an impressive picture of a new technology quickly proving its worth. Any technology that boosts productivity and enhances the skills of less experienced workers could be a boon to both individuals and the wider economy. GitHub goes on to offer some heady speculation, estimating that AI coding could boost global GDP by $1.5 trillion by 2030.

But GitHub's graph showing programmers bonding with Copilot reminded me of another study I heard about recently while chatting with Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, about programmers' relationships with tools like Copilot.

Late last year, a team from Stanford University released a research paper examining how using a code-generating AI assistant affects the quality of the code people produce. The researchers found that programmers who received AI suggestions tended to include more bugs in their final code, yet those with access to the tool tended to believe their code was more secure. There are likely both benefits and risks to coding in tandem with AI, Ringer says; more code is not better code.

Given the nature of programming, this finding is not surprising. As Clive Thompson wrote in a 2022 WIRED feature, Copilot may seem miraculous, but its suggestions are based on patterns in other programmers' work, which may be flawed. Those borrowed patterns can create bugs that are fiendishly hard to spot, especially when you are beguiled by how good the tool often seems.

We know from other areas of engineering that humans can be lulled into over-reliance on automation. The US Federal Aviation Administration has repeatedly warned that some pilots have become so dependent on autopilot that their flying skills are atrophying. A similar phenomenon is familiar from self-driving cars, where extraordinary vigilance is required to guard against rare but potentially deadly anomalies.

This paradox may prove central to the unfolding story of generative AI and where it will lead us. The technology already appears to be driving a downward spiral in web content quality, as trusted sites are inundated with AI-generated dross, spam websites proliferate, and chatbots artificially boost engagement.

None of this is to say that generative AI is a failure. There is a growing body of research showing how AI tools can increase the performance and happiness of some workers, such as those handling customer service calls. Other studies have found no increase in security bugs when developers use an AI assistant. And to its credit, GitHub is investigating how to code securely with AI assistance. In February, it announced a new Copilot feature that attempts to detect vulnerabilities generated by the underlying model.

But the complex effects of code generation provide a cautionary tale for companies working to implement generative algorithms for other use cases.

Regulators and lawmakers paying more attention to AI should take note, too. With so much enthusiasm for the technology's potential and wild speculation about how it could take over the world, subtler but more substantial evidence of how well AI deployments actually work could be overlooked. Almost everything in our future will be underpinned by software, and if we're not careful, it may also be riddled with AI-generated bugs.

This story originally appeared on wired.com.
