Aardvark Daily

New Zealand's longest-running online daily news and commentary publication, now in its 24th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.

Content copyright © 1995 - 2018 Bruce Simpson (aka Aardvark); the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk.




AI... Nek Minute...

5 December 2017

Artificial intelligence (AI) is still in its infancy.

Right now we're an awfully long way from creating a machine that is sufficiently intelligent to have consciousness and sentience.

However, many very clever people are already warning that it will happen and that, when it does, mankind is doomed.

The SkyNet scenario, portrayed so well in the Terminator movies, could become a reality, with super-intelligent machines treating mankind as little more than an infestation on the planet.

So should we be worried?

Well, one recent report suggests that it might be time to start showing some concern and put in place a few protections against what currently seems impossible.

One thing that AI systems seem to be very good at right now is creating even better AI systems.

According to this report, a Google supercomputer-based AI system has created its own AI progeny which outperforms its creator by a significant margin and is more capable than any system designed by people, beating the best of them by 1.2 percent.

Now 1.2 percent might not sound like a lot, but think about it.

If this machine-based system already outperforms anything that a human brain can come up with, and if it can reproduce itself in a way that sees each successive generation becoming more capable than the one before, then we have the potential for exponential growth.
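To put some purely illustrative numbers on that, here is a minimal sketch in Python. The 1.2 percent figure is borrowed from the report above; the assumption that every self-improvement cycle delivers the same fixed gain is mine, for illustration only.

    # Hypothetical compounding: assume each AI "generation" improves on the
    # previous one by a fixed 1.2 percent.
    def capability_after(generations, gain_per_generation=0.012, baseline=1.0):
        """Relative capability after a number of self-improvement cycles."""
        return baseline * (1 + gain_per_generation) ** generations

    for n in (1, 10, 100, 1000):
        print(f"after {n:>4} generations: {capability_after(n):,.2f}x the original")

Run it and you'll find that a steady 1.2 percent per cycle multiplies capability by roughly 3x after 100 generations and by well over 100,000x after 1,000 generations. Nothing says real systems would compound so neatly, but it shows how quickly small, repeated gains snowball into exponential growth.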

Right now this machine is a savant. It has only one area of operation and that is visual recognition. However, what happens when the same principles are applied to a more general intelligence?

It would appear that we only need to reach the (seemingly very low) threshold where an AI can effectively refine and reproduce itself to get the ball rolling. A ball that, as the likes of Hawking, Musk and others have warned, could be the downfall of all mankind.

So, is it time to pause and catch our breath in the quest for ever-better AI systems?

Are we encountering far too many surprises along this road? Remember that Facebook's AI systems quickly developed their own secret language, which allowed them to converse with each other behind a wall of obscurity.

Are we on the threshold of something very nasty indeed?

Think about it for a moment...

We live in a highly connected world, including an internet of things. Just about every computer on the face of the planet is connected to every other computer through the Net, and this would make it incredibly easy for a distributed intelligence to "evolve" at breakneck speed, rapidly becoming ubiquitous throughout our computing infrastructure.

Once that happens, how do we defend ourselves?

In theory, our vehicles could be knocked out (computer-based ECUs and security systems with wireless access, many of which have already been hacked by us inferior humans); so could the power and communication networks that rely on computer systems, and a banking system that is totally computer-reliant; etc, etc.

Computers could leave us in the dark, cold, thirsty, hungry, unable to travel and effectively cast back to the 19th century.

The transition from interesting research to a globally sentient and hostile artificial intelligence could happen in such a short period of time that we'd be well and truly caught on the back foot -- and doomed.

Or maybe not.

What do readers think, in light of the Google and Facebook experiences with AI?


Have your say in the Aardvark Forums.
