Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 25th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2019 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
I recall back in the late 1980s and early 1990s that artificial intelligence was, at least for a short time, touted as "the next big thing".
"Expert Systems" based on AI cores would, we were told, change the world.
Well, that never quite happened and, although there were some attempts to build systems that not only provided intelligent access to data but were also self-learning, the hardware just wasn't there to support the incredible amount of processing this type of tech really required.
Jump forward 30 years or so and we're again being told that AI is the future... but this time we've got a lot more horsepower to play with so maybe... just maybe, it is.
A surprising number of vendors are now touting AI products and you can buy SDKs to design and build your own AI system if you really want to.
Most of these systems do require a fair bit of processor power, however, which has limited their application to server-based stuff.
However, that situation is changing rapidly and numerous vendors are now rolling out silicon that is specifically designed for AI applications.
Perhaps one of the most high-profile bits of AI silicon is Intel's Nervana product.
Nervana is a neural net on a chip which is designed to be very scalable by way of simply hooking multiple units together. Inter-chip connectivity is a strong point of the family and Intel are claiming that its "deep learning" performance will increase by two orders of magnitude over the next couple of years.
Meanwhile, researchers at the McCormick School of Engineering have developed a device called a memtransistor which effectively mimics the operation of a neuron. This bit of silicon combines both memory and processing functions with up to seven connections (synapses).
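As a rough software analogy (the numbers and names here are purely illustrative, not the McCormick design), a single neuron with seven synapses is just a weighted sum of its inputs passed through an activation function, something like this:

```python
# Minimal artificial-neuron sketch: seven weighted inputs ("synapses"),
# summed and passed through a simple threshold activation. Illustrative
# only -- a memtransistor does this in hardware, not software.

def neuron(inputs, weights, threshold=0.5):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Seven synapses, matching the memtransistor described above.
inputs = [1, 0, 1, 1, 0, 0, 1]
weights = [0.2, 0.1, 0.3, -0.1, 0.4, 0.0, 0.25]
print(neuron(inputs, weights))
```

The point of doing this in silicon rather than code is that the weighted sum happens physically, in one step, instead of as a loop of multiply-and-add instructions.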
When you stop and think about it, simply building a silicon "brain" using electronic analogs of the components we find in our own brains makes a lot of sense.
Now that we can fit billions of transistors on a single chip, building neural networks out of silicon could enable the creation of systems with the theoretical potential to be as intelligent as ourselves -- and once a bootloader was created, they'd be self-learning and self-programming, just like us.
Conventional processors running neural-network code are an incredibly inefficient way to create AI, and it's not really practical to produce a system from traditional processors that delivers the same level of parallelism a true neural net can provide.
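To see why, here's a sketch (my own illustration, with made-up weights) of what one fully-connected layer looks like as conventional code. A CPU grinds through these loops one multiply at a time; dedicated neural silicon evaluates every neuron, and every synapse within it, simultaneously:

```python
# One neural-network layer, written the way a conventional processor
# executes it: nested loops, one multiply-accumulate at a time.
# AI silicon performs all of these operations in parallel.

def layer(inputs, weights, biases):
    """Evaluate a fully-connected layer serially, neuron by neuron."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(i * w for i, w in zip(inputs, neuron_weights)) + bias
        outputs.append(max(0.0, total))  # ReLU activation
    return outputs

# Two neurons, three inputs each: six multiplies that could all
# happen at once in hardware.
x = [1.0, 2.0, 3.0]
W = [[0.5, -0.25, 0.1], [0.2, 0.2, 0.2]]
b = [0.0, -1.0]
print(layer(x, W, b))
```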
I expect to see a lot of work being done in silicon to produce future generations of AI technology, and it is also quite possible that we'll see the two orders of magnitude performance increase that Intel is predicting. Could this be the new equivalent of Moore's Law: AI deep-learning ability growing by an order of magnitude every year?
Now there's a scary thought!
Have your say in the Aardvark Forums.