Aardvark Daily: New Zealand's longest-running online daily news and commentary publication, now in its 25th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy. Content copyright © 1995 - 2019 to Bruce Simpson (aka Aardvark); the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
The doomsayers have been warning that artificial intelligence (AI) may pose an existential threat to mankind for quite some time now.
This week they've repeated their warnings, and indications are that the "escape" of AI from our control may be a lot closer than we'd like to believe.
Grab your favourite sedative or tranquiliser and prepare to read the portents of doom currently being uttered by those who are apparently "in the know".
The first story on this subject warns that AI Could Escape Control at Any Moment, according to leading AI scientists.
To be honest, so long as we have a big red physical button labelled "Power Off", I think things will be okay and the narrative of the Terminator movies will not see our species wiped from the face of the planet by armies of killer cyborgs.
In another story published today, OpenAI CEO Sam Altman suggested that a "superintelligent AI" is just a few years away.
The implications of this are manifold and largely very positive.
Previously unsolvable problems could finally be addressed in ways that our own limited abilities have made impossible.
New advances in science, medicine and other technologies may come at a much increased pace, perhaps heralding a very rapid advance in our capabilities as a civilisation.
As the article suggests, we may be on the cusp of "The Intelligence Age", a defining point in our evolution that matches or exceeds those ages that have come before, such as the stone age, iron age, industrial age, etc.
A super-powerful AI system could see exponential growth as it is applied to the design and creation of even more intelligent systems in a bootstrapping process that we've only just begun.
My biggest worry is that we become too dependent on AI in the way we've become dependent on so many other technologies. Dependence always carries a cost.
Since we developed low-cost personal transport, people hardly walk anywhere and that's been a huge contributor to obesity and age-related diseases. Our muscles, if not used regularly, atrophy and wither.
Unfortunately for us, the brain is very much like those muscles. If we no longer have to think for ourselves, do we run the risk that our own IQ will gradually fall over time?
One could argue that when we have technology that can replace our own feeble muscles or intelligence, it would be foolish not to take full advantage of that tech, and that it doesn't matter if our own innate abilities are lost -- because our technology will do the job. "Why have a dog and bark yourself?" one might ask.
We already have a world where computers have replaced much of our thinking -- just watch a checkout operator try to calculate change in their head, and I wonder how many people could actually do long division by hand if they had to. Why bother -- when we have point of sale terminals, calculators and all manner of tech that will perform those tasks for us?
Why bother being fit enough to walk to the supermarket and then walk home again with a backpack filled with food when you can just jump in your car and complete the mission in a fraction of the time?
Well, I wonder how we'd get on if some kind of catastrophic event took out our technology.
What if our tech is rendered useless by war, a coronal mass ejection (CME), or some other disaster?
Our only backup is ourselves and our ability to function without all our technology-based crutches.
Do we really need that backup?
Carpe Diem folks!