Aardvark Daily

The world's longest-running online daily news and commentary publication, now in its 30th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.

Content copyright © 1995 - 2025 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk




Are most people using AI wrong?

21 April 2026

Like it or not, artificial intelligence (AI) is here and here to stay.

In a few short years it has dramatically changed the way we approach certain tasks and caused massive upheaval in areas of human endeavour that few would have imagined as little as a decade ago.

Creative industries are being decimated by AI image, music, prose and video creation.

Voice-over artists are struggling for work as companies such as Eleven Labs now deliver almost flawless narration and voicing capabilities at a lower cost than even the cheapest worker on Fiverr.com.

Lawmakers are struggling to keep up; AI has outstripped their ability to come up with a framework that ensures adequate protection of intellectual property, and copyright law now seems unprepared for the nuances that have emerged.

There are also growing concerns that the use of LLM chatbots is making people stupider by effectively taking the load off their brains. The human brain is like a muscle: if you don't exercise its cognitive powers, they will atrophy and decline, hence the danger.

According to the BBC article reporting on that research, those using ChatGPT showed 55 percent less brain activity than the control group. That's a *significant* and worrying difference, and I don't see why other LLMs would be markedly different in the magnitude of their effects.

Sadly, I think the problem is that most people do not understand how to "safely" use AI.

Whether it's vibe-coding a new application, creating a music track or producing a stunning piece of art for personal or commercial use, the usual strategy seems to be simply typing "create xyz", perhaps with a few qualifying descriptive terms to refine what you're expecting.

That risks removing your own creative and cognitive load from the equation, leaving almost all the work to the AI.

In that case, you're not really creating anything -- it's the AI that does the heavy lifting.

From where I sit, this is not a good way to use AI. It's lazy and ultimately, although you may be astounded by the results, you'll never end up with something that is any better than, or significantly different from, what everyone else doing the same thing produces.

Do this often enough and you may even lose a significant amount of your ability to create without the aid of your favourite LLM. That would be a very real loss and a weakness.

I've been using AI quite heavily to assist with planning my recovery from the recent trauma I've had to endure, but instead of simply typing "Give me a roadmap to recovery", I've done a lot of "manual" research and then thrown concepts at the AI to get its critique of what I've proposed.

In this capacity, AI is quite good at spotting deficiencies and flaws in my strategies, things that I have overlooked or areas where there's clearly a gap in my knowledge.

When such gaps are highlighted, I go back to old-school techniques and do the hard yards, reading the literature and papers involved rather than relying on a perhaps somewhat erroneous AI summary.

In doing these "hard yards" I am repeatedly reminded by the evidence that even the best AI is sometimes really crappy at sticking to the facts when it produces an overview or key-points synopsis of a complex document. It may get 90 percent of it right, but there is so often that 10 percent where it has either misinterpreted what's been written or, for some unknown reason, decided to pad things out with material of its own invention.

I pity the fool who relies blindly on the output of current AI systems without checking every statement for veracity. Perhaps this explains why we're seeing so many vibe-coded updates to crucial software such as MS Windows and NVIDIA GPU drivers littered with ridiculous bugs that should never exist.

Right now I believe that the safest and sanest way to use the current generation of AI systems is as something to bounce ideas and plans off, rather than as something to devise and implement those strategies itself. This approach ensures that YOU maintain a keen cognitive edge and use AI as a tool rather than a crutch. Even from an employment perspective, this could give you a keen competitive edge once the cognitive abilities of others have been dulled by protracted lack of use.

However, things are changing at such a pace in the AI world that it may not be long before we have totally reliable LLMs whose output can be trusted. Will we then find it pointless to maintain our own cognitive abilities?

As I've written previously, when I was at school there were fears that the slide rule and the pocket calculator would "weaken" our brains, leaving us unable to remember even simple "times tables" or do maths by hand. Has that actually happened? Were those fears much ado about nothing?

Or have you also found that the local checkout operator probably couldn't manually calculate your change if the power was out and you were paying by cash?

Does that really matter any more?

Carpe Diem folks!
