Google
 

Aardvark Daily

The world's longest-running online daily news and commentary publication, now in its 30th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.

Content copyright © 1995 - 2025 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk



Please visit the sponsor!
Please visit the sponsor!

Another AI risk

2 May 2025

As if yesterday's column wasn't sobering enough, now we've been hit with yet another warning about the dangers of AI.

This time it strikes at the heart of one area where AI has been seemingly providing huge productivity gains: programming.

Most AI systems can churn out programming code in a wide range of languages, sometimes even creating complex applications from just a series of plain-English commands.

AI is also starting to grab a pretty good foothold in the area of maintaining legacy code where documentation or programmers skills are now scarce.

However, latest reports indicate that there's a potentially significant risk associated with having Gemini, Grok, Claude Sonnet or any other AI cut your code for you.

This risk boils down to the way that these AI agents have accumulated their wealth of knowledge and deductive reasoning abilities.

Some have also suggested that in the case of AI such as China's DeepSeek, the risk may come from things hidden deep within the datasets being used.

In short, when an AI produces program code, it does so by referring to a huge set of data which it then uses to figure out how to create the required outcome from the user-generated requirements. Analysis of the resulting programs often throws up chunks of code that are pretty much just ripped out of examples that were scraped from the Net and added to that dataset from which it works.

So what if that dataset contains routines that actually create backdoors or deliberate vulnerabilities and the AI unknowingly inserts those into the new programs it creates?

The whole idea of having AI generate your code is that it saves you a huge amount of time, effort and cost -- so it's unlikely that those benefits will be minimised by having a skilled programmer inspect every line of the programs that are generated so as to establish the "safety" of the resultant output.

It stands to reason therefore that as big and small tech alike place an increased reliance on AI (Microsoft uses AI for 30 percent of its new code) there's a growing chance that some of that code will contain unidentified vulnerabilities.

I wonder how many hackers and state-sponsored players are, right now, working hard to "infect" the datasets of AI agents with code that they hope will be integrated into the programs that will be automatically created.

Not only could this result in covert back-door access to massive amounts of supposedly secure data on AI-generated sorftware systems but there's also significant potential for ransomware attacks using the same vector.

The big problem with AI right now is that we're dealing with systems that have such huge datasets that it's pretty much beyond our abilities to validate all that data as "safe" and weed out anything that might compromise the security of code they generate.

Right now it appears that the mitigation strategy consists of crossing your fingers and hoping. That's a strategy that works perfectly well -- until it doesn't and, by the time you find out that things have gone awry it's way too late to prevent wide-scale disaster.

I predict some "interesting times" ahead for the IT industry and quietly await the first reported instance of such a vulnerability being exploited in AI-generated code on some kind of critical software system.

Carpe Diem folks!

Please visit the sponsor!
Please visit the sponsor!

Here is a PERMANENT link to this column


Rank This Aardvark Page

 

Change Font

Sci-Tech headlines

 


Features:

The EZ Battery Reconditioning scam

Beware The Alternative Energy Scammers

The Great "Run Your Car On Water" Scam

 

Recent Columns

Is AI lying to us?
Today I asked Google's Gemini if it was a threat to mankind...

When a computer game becomes reality
If, like me, you were into Atari computer gaming back in the late 1970s and early 1980s then...

The evil effect of #TheNarrative
As regular readers will know, I fly RC model aircraft and drones -- or at least I used to...

Explain me this
We live in a world that gets weirder by the day...

Cray versus Raspberry Pi
I fondly recall the era when the pinnacle of supercomputing was the Cray 1...

The end of the information age?
I am getting increasingly tired of the hypocrisy being engaged in by big-tech...

Faux experts abound
Today combines two of my pet peeves: local councils and social media influencers...

What you need to know about EMP weapons
As we sit, possible poised on the verge of a nuclear conflict in the Northern Hemisphere...

8-bit computers are great
Over the past few days I've found myself reading and watching a lot of material that documents the early days of the microcomputer revolution...

Do you remember when...
As a (very effective) cure for insomnia, I've been watching a heap of old black and white movies...