![]() |
Aardvark DailyThe world's longest-running online daily news and commentary publication, now in its 30th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.Content copyright © 1995 - 2025 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk |
![]() Please visit the sponsor! |
As if yesterday's column wasn't sobering enough, now we've been hit with yet another warning about the dangers of AI.
This time it strikes at the heart of one area where AI has been seemingly providing huge productivity gains: programming.
Most AI systems can churn out programming code in a wide range of languages, sometimes even creating complex applications from just a series of plain-English commands.
AI is also starting to grab a pretty good foothold in the area of maintaining legacy code where documentation or programmers skills are now scarce.
However, latest reports indicate that there's a potentially significant risk associated with having Gemini, Grok, Claude Sonnet or any other AI cut your code for you.
This risk boils down to the way that these AI agents have accumulated their wealth of knowledge and deductive reasoning abilities.
Some have also suggested that in the case of AI such as China's DeepSeek, the risk may come from things hidden deep within the datasets being used.
In short, when an AI produces program code, it does so by referring to a huge set of data which it then uses to figure out how to create the required outcome from the user-generated requirements. Analysis of the resulting programs often throws up chunks of code that are pretty much just ripped out of examples that were scraped from the Net and added to that dataset from which it works.
So what if that dataset contains routines that actually create backdoors or deliberate vulnerabilities and the AI unknowingly inserts those into the new programs it creates?
The whole idea of having AI generate your code is that it saves you a huge amount of time, effort and cost -- so it's unlikely that those benefits will be minimised by having a skilled programmer inspect every line of the programs that are generated so as to establish the "safety" of the resultant output.
It stands to reason therefore that as big and small tech alike place an increased reliance on AI (Microsoft uses AI for 30 percent of its new code) there's a growing chance that some of that code will contain unidentified vulnerabilities.
I wonder how many hackers and state-sponsored players are, right now, working hard to "infect" the datasets of AI agents with code that they hope will be integrated into the programs that will be automatically created.
Not only could this result in covert back-door access to massive amounts of supposedly secure data on AI-generated sorftware systems but there's also significant potential for ransomware attacks using the same vector.
The big problem with AI right now is that we're dealing with systems that have such huge datasets that it's pretty much beyond our abilities to validate all that data as "safe" and weed out anything that might compromise the security of code they generate.
Right now it appears that the mitigation strategy consists of crossing your fingers and hoping. That's a strategy that works perfectly well -- until it doesn't and, by the time you find out that things have gone awry it's way too late to prevent wide-scale disaster.
I predict some "interesting times" ahead for the IT industry and quietly await the first reported instance of such a vulnerability being exploited in AI-generated code on some kind of critical software system.
Carpe Diem folks!
![]() Please visit the sponsor! |
Here is a PERMANENT link to this column
Beware The Alternative Energy Scammers
The Great "Run Your Car On Water" Scam