Aardvark Daily

The world's longest-running online daily news and commentary publication, now in its 30th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.

Content copyright © 1995 - 2025 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk




On the edge of disaster?

30 Mar 2023

It's no surprise... AI (artificial intelligence) is in the news again for all the wrong reasons.

According to this BBC story, a bunch of industry "luminaries" have signed a letter calling for a halt in AI development until such time as the risks are better established and protections have been put in place.

The list of signatories includes such recognisable names as Elon Musk and Steve Wozniak, something the media seems to think lends extra credibility to the warnings.

If Stephen Hawking were still alive he'd probably have signed the letter as well since he issued his own warning almost a decade ago.

So how real are the risks and how sage are the warnings?

It certainly appears, with the release of ChatGPT, DALL-E and other impressively capable AI systems, that artificial intelligence is making gargantuan leaps in its power and effectiveness.

However, experts say that we are still a long way from creating an AI with sentience.

The real problem is that an AI does not need to be sentient to be a danger to mankind.

The power of AI is that it seems capable of constructing non-obvious solutions to complex problems. This makes it a powerful tool when developing new technologies, especially medical ones.

The worry, however, is that this ability of AI to "think outside the box", combined with our own propensity to make mistakes, could be mankind's undoing.

Our ability to get things wrong when programming or working with computers is demonstrated almost every day... by way of zero-day exploits allowing bad actors to steal data or inject malware into supposedly secure systems. The reality is that humans are imperfect and we make mistakes.

Imagine, therefore, if the rulesets used to control the actions of an AI system have similar flaws.

We may think we've built in sufficient safeguards to protect ourselves from a system that constantly thinks "outside the box", but what happens when that system decides that humans are actually part of the problem and that the solution involves eliminating them?

Yeah, on a large scale this is real dystopian sci-fi stuff, but on a smaller scale it is probably already happening.

What do I mean?

Well, stories like this provide clear evidence that AI is going to allow us to dispense with humans in many everyday roles. Sure, it's not "Terminator" stuff, but if you're one of the countless people whose jobs are suddenly replaced by AI you are not going to be in a happy place.

How long do you think it will be before AI-based systems are rolled out by governments and corporations -- so as to minimise costs and maximise efficiencies?

When it's an AI system that decides whether you qualify for life-saving surgery or some form of state-funded assistance then lives really could be on the line. If someone forgot to include a rule that stops the AI from saving money by simply saying "no" when it really ought to say "yes", then we have a problem.
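
To make that concrete, here is a deliberately over-simplified sketch (the rule names, thresholds and data are all invented for illustration, not taken from any real system) of how a cost-driven ruleset with one missing rule quietly turns into a default "no":

    # Hypothetical, deliberately simplified eligibility checker.
    # Every rule name and threshold here is invented for illustration only.

    RULES = [
        lambda person: person["age"] < 75,           # cost-saving age cut-off
        lambda person: person["prior_claims"] < 3,   # cost-saving claims cap
        # Note what's missing: nobody wrote the rule that says
        # "approve anyway when the treatment is life-saving".
    ]

    def decide(person):
        # Any failed rule means "no" -- the cheapest possible default.
        return "approved" if all(rule(person) for rule in RULES) else "declined"

    print(decide({"age": 80, "prior_claims": 0, "life_saving": True}))  # "declined"

The point isn't the code itself; it's the shape of the failure. The system does exactly what it was told to do, and the harm comes from the rule nobody thought to write.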

Let's not forget that AI systems must be trained and that training relies on their mistakes being corrected rather than ignored. Therein also lies a huge problem.

Once it has reached a certain level of competency, the temptation will be to dispense with the "correction" aspect of the training. "Correction" requires a human to look at the output of the AI and compare it to the inputs so as to establish the correctness of the conclusion reached. When you're trying to save money the temptation will always be to say "near enough" and under-resource or completely eliminate that corrective step.

Once any corrective feedback is gone, the AI becomes capable of learning bad stuff, and that bad stuff will proliferate as if it were valid output. By the time this undesired output is recognised, untold harm may already have been done.
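
As a rough illustration of that feedback problem (everything below is a toy I've invented for the purpose -- no real training framework or vendor is implied), compare what gets "learned" when the human correction step is kept versus when it is cut to save money:

    # Toy example only: a fake "AI" that remembers answers, with and without
    # a human correcting its mistakes. No real AI training method is implied.

    import random

    def toy_ai(question, memory):
        # Answer from memory if we've "learned" this one, otherwise guess.
        return memory.get(question, random.choice(["yes", "no"]))

    def run(questions, truth, human_review):
        memory = {}
        for q in questions:
            answer = toy_ai(q, memory)
            if human_review and answer != truth[q]:
                answer = truth[q]      # a human spots the mistake and corrects it
            memory[q] = answer         # whatever survives review is now "learned"
        wrong = sum(memory[q] != truth[q] for q in truth)
        return f"{wrong} of {len(truth)} wrong answers baked in"

    truth = {f"q{i}": "yes" for i in range(10)}
    questions = [f"q{i}" for i in range(10)] * 3     # each question seen three times

    print("with review:   ", run(questions, truth, human_review=True))
    print("without review:", run(questions, truth, human_review=False))

With the reviewer in the loop the mistakes die on first contact; without it, the first wrong guess is stored and repeated on every later pass as if it were valid output.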

AI systems may turn out to be fantastic green fields for bad actors. Imagine how many "social engineering" strategies might prove effective with a comparatively naive AI system.

We've already seen examples where ChatGPT's inbuilt protections have been sidestepped by carefully crafted queries. This is somewhat akin to the zero-day exploits we're familiar with in computer code.

To be totally honest, however, I'm not really sure that our AI systems are as great as some claim them to be. Once you get over the "wow" factor of a seemingly intelligent chatbot, the reality starts to bleed through.

When asked to create computer code, ChatGPT often just regurgitates code it found on GitHub that solves the same problem as the one you have defined. When asked to create artwork, it becomes pretty clear that some of DALL-E's creations are collages of existing works that it has copied and stored in its database.

Is there really any original thought going on, or are today's AI systems just machines that are good at re-organising existing information based on carefully parsed queries?

Should we take heed of the warnings that are being given by industry figureheads?

Well, first we need to recognise that neither Elon Musk nor Steve Wozniak are AI gurus. Musk is famous for selling snake oil and smashing the "unbreakable" windows on a Cybertruck, while Woz developed the hardware for Apple's first computers many decades ago. Not exactly prime qualifications in this case.

However, I still believe we should heed their warnings... not because AI itself poses a threat to mankind (at this point in its development) but because we are basically an unreliable species that has proven time and time again that we make silly mistakes with complex technologies.

What do you think? To the forums with you!

Carpe Diem folks!


Have your say in the Aardvark Forums.
