Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 23rd year. The opinion pieces presented here are not purported to be fact, but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2017 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
Artificial intelligence (AI) is still in its infancy.
Right now we're an awfully long way from creating a machine that is sufficiently intelligent to have consciousness and sentience.
However, many very clever people are already warning that it will happen and that, when it does, mankind is doomed.
The SkyNet scenario, portrayed so well in the Terminator movies, could become reality, with super-intelligent machines treating mankind as little more than an infestation on the planet.
So should we be worried?
Well, one recent report suggests that it might be time to start showing some concern and put in place a few protections against what we've long assumed to be impossible.
One thing that AI systems seem to be very good at right now is creating even better AI systems.
According to this report, a Google supercomputer-based AI system has created its own AI progeny which outperforms its creator by a significant margin, beating the best system designed by people by 1.2 percent.
Now 1.2 percent might not sound like a lot but think about it.
If this machine-based system already outperforms anything that a human brain can come up with, and it can reproduce itself in a way that sees each successive generation becoming more capable than the one before -- we have the potential for exponential growth.
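To see what that exponential growth looks like, here's a toy sketch. It assumes (a big assumption, purely for illustration) that the 1.2 percent gain reported for one generation would hold for every generation of self-improvement; the function name and the generation counts chosen are mine, not from the report.

```python
# Toy model: compound a fixed 1.2% improvement per "generation" of
# self-designed AI, relative to a human-designed baseline of 1.0.
# This is an illustration of compounding, not a claim about real systems.

def capability_after(generations, gain_per_generation=0.012):
    """Relative capability after n self-improvement cycles."""
    return (1 + gain_per_generation) ** generations

for n in (1, 10, 58, 100, 500):
    print(f"generation {n:3d}: {capability_after(n):8.2f}x baseline")
```

Under this naive model, capability roughly doubles every 58 generations or so; by generation 500 the system would be hundreds of times more capable than the human-designed baseline. That's what "exponential" buys you, even from a modest-sounding 1.2 percent.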
Right now this machine is a savant. It has only one area of operation and that is visual recognition. However, what happens when the same principles are applied to a more general intelligence?
It would appear that we only need to reach the (seemingly very low) threshold where an AI can effectively refine and reproduce itself to get the ball rolling. A ball that, as the likes of Hawking, Musk and others have warned, could be the downfall of all mankind.
So, is it time to pause and catch our breath in the quest for ever-better AI systems?
Are we encountering far too many unexpected surprises along this road? Remember that Facebook's AI systems quickly developed their own secret language which allowed them to converse with each other behind a wall of obscurity.
Are we on the threshold of something very nasty indeed?
Think about it for a moment...
We live in a highly connected world, including an internet of things. Just about every computer on the face of the planet is connected to every other computer through the Net, and this would make it incredibly easy for a distributed intelligence to "evolve" at breakneck speed, rapidly becoming ubiquitous throughout our computing infrastructure.
Once that happens, how do we defend ourselves?
In theory, our vehicles could be knocked out (computer-based ECUs and security systems with wireless access -- many of which have already been hacked by us inferior humans); likewise the power and communication networks that rely on computer systems; a banking system that is totally computer-reliant; etc, etc.
Computers could leave us in the dark, cold, thirsty, hungry, unable to travel, and effectively cast back to the 19th century.
The transition from interesting research to a globally sentient and hostile artificial intelligence could happen in such a short period of time that we'd be well and truly caught on the back foot -- and doomed.
Or maybe not.
What do readers think, in light of the Google and Facebook experiences of AI?
Have your say in the Aardvark Forums.