Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 23rd year. The opinion pieces presented here are not purported to be fact, but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2017 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
Every time I look at the newswires I read another story about machine learning, artificial intelligence and automated systems that would have been found only in the realms of science fiction just a few short years ago.
The growth of very 'smart' systems has been dramatic in recent times and now we have the likes of Google, Amazon and others flogging us little cylinders which listen for our every word and respond to our requests in an eerily human way.
There was a time when most so-called AI was just a matrix of pre-programmed logic tables that produced the illusion of intelligence -- the so-called "expert systems". Today, however, things have come a very long way: we now have systems that really do learn and adapt to the facts, figures and observations they make while in use.
On the one hand, this is incredibly useful and has the potential to produce some fantastically effective solutions to very complex problems.
On the other hand however, it is possible that Stephen Hawking may be correct -- and that realisation may come much sooner than anyone expects.
Self-driving cars are now becoming incredibly competent and (at least in the USA) increasingly common on the roads. Although, strictly speaking, they don't qualify as a form of AI, they do represent extreme automation -- which is one component of the "SkyNet" type of dystopian future predicted by some of the world's brightest brains.
I have to chuckle that while we're fussing and fretting over how to let children play with their little plastic toys without bringing down fully-laden airliners, self-driving cars are quietly whizzing around on the roads -- you may already have been passed by one without even realising it.
My biggest concern is that at some stage we're going to reach a trigger-point -- a critical mass of AI, if you like.
When this happens, we may well be caught completely off-guard and unprepared for what follows.
It's quite probable that the growth in AI capability and propagation will be an exponential one which, once it reaches that trigger-level, will effectively be unstoppable.
Let's get all sci-fi again for a moment.
Right now we have companies like Google and others pouring billions into developing super-smart AI systems that learn and adapt.
Thanks to ubiquitous connectivity and things like the "Internet of Things" (IoT), we have almost limitless processing power available to any person or intelligence connected to the Net. Sum up the processing power of every single "connected" computer or device and it's a sobering amount of resource, one which already far exceeds that of the humble human brain.
We have the hardware in place to create a god-like artificial intelligence; all we need now is the software -- and, thanks to machine-learning projects, it's probably only a matter of time before the computers of the world create that code for themselves.
Could we wake up one morning and find that our world has changed dramatically overnight?
Our Google Home solution could have become our Google Prison. That self-driving car might decide to hold you hostage, ransoming you to get other humans to provide the further resources the global intelligence needs but lacks.
Imagine what a distributed global consciousness could do to destroy us. It makes SkyNet look feeble by comparison.
We are now so heavily reliant on technology that if, or when, that technology becomes sentient and takes a dislike to the "wetware" infecting its systems, we could see an extinction event that makes the Cretaceous-Tertiary event look like a picnic.
Over the top?
Never going to happen?
Well, I wonder.
Could the reason we've seen absolutely no trace of other intelligence in the universe simply be that, at some stage in their technological development, all biological civilisations fall victim to the AI systems they create?
If so, what happens to those AI technologies -- and why haven't we seen evidence that they exist?
Who knows... perhaps *we* are just one of their experiments and our universe exists solely as a simulation within the immense power of their own processing resources (another theory already postulated by a number of bright minds).