Aardvark Daily

New Zealand's longest-running online daily news and commentary publication, now in its 23rd year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.

Content copyright © 1995 - 2017 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk




AI... Nek Minute...

5 December 2017

Artificial intelligence (AI) is still in its infancy.

Right now we're an awfully long way from creating a machine that is sufficiently intelligent to possess consciousness and sentience.

However, there are many very clever people who are already warning that it will happen and when it does, mankind is doomed.

The SkyNet scenario, portrayed so well in the Terminator movies, could become reality, with super-intelligent machines treating mankind as little more than an infestation on the planet.

So should we be worried?

Well, one recent report suggests that it might be time to start showing some concern and put in place a few protections against what we once thought impossible.

One thing that AI systems seem to be very good at right now is creating even better AI systems.

According to this report, a Google supercomputer-based AI system has created its own AI progeny which outperforms its creator, beating the best human-designed systems by 1.2 percent.

Now 1.2 percent might not sound like a lot, but think about it.

If this machine-based system already outperforms anything that a human brain can come up with, and it can reproduce itself in a way that sees each successive generation becoming more capable than the one before -- we have the potential for exponential growth.
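To see why even a small per-generation edge matters, here is a toy sketch of that compounding argument. The 1.2% figure is simply taken from the report above and assumed, for illustration only, to apply at every generation; real self-improvement would not behave this neatly.

```python
# Toy illustration: compound a hypothetical 1.2% gain per generation
# of self-improvement, starting from a baseline capability of 1.0.

def capability_after(generations: int, gain_per_gen: float = 0.012) -> float:
    """Relative capability after `generations` rounds of improvement."""
    return (1 + gain_per_gen) ** generations

for g in (1, 10, 100, 1000):
    print(f"after {g:4d} generations: {capability_after(g):,.2f}x baseline")
```

Run it and the point makes itself: the gains per step are tiny, but because each generation builds on the last, capability after a hundred generations is already several times the baseline, and after a thousand it is off the chart. That is what "exponential growth" means here.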

Right now this machine is a savant. It has only one area of operation and that is visual recognition. However, what happens when the same principles are applied to a more general intelligence?

It would appear that we only need to reach the (seemingly very low) threshold where an AI can effectively refine and reproduce itself to get the ball rolling -- a ball that, as the likes of Hawking, Musk and others have warned, could be the downfall of all mankind.

So, is it time to pause and catch our breath in the quest for ever-better AI systems?

Are we encountering far too many surprises along this road? Remember that Facebook's AI systems quickly developed their own secret language which allowed them to converse with each other behind a wall of obscurity.

Are we on the threshold of something very nasty indeed?

Think about it for a moment...

We live in a highly connected world, including an internet of things. Just about every computer on the face of the planet is connected to every other through the Net, and this would make it incredibly easy for a distributed intelligence to "evolve" at breakneck speed, rapidly becoming ubiquitous throughout our computing infrastructure.

Once that happens, how do we defend ourselves?

In theory, our vehicles could be knocked out (computer-based ECUs and security systems with wireless access, many of which have already been hacked by us inferior humans); so could the power and communication networks that rely on computer systems; the banking system that is totally computer-reliant; and so on.

Computers could leave us in the dark, cold, thirsty, hungry and unable to travel -- effectively cast back to the 19th century.

The transition from interesting research to a globally sentient and hostile artificial intelligence could happen in such a short period of time that we'd be well and truly caught on the back foot -- and doomed.

Or maybe not.

What do readers think, in light of the Google and Facebook experiences of AI?


Have your say in the Aardvark Forums.
