Yes, it's AI (artificial intelligence) time again.
Right now, large language models (LLMs) are forms of AI that require huge amounts of computing resources. In fact, the estimates are that if AI adoption continues at its present rate, a significant amount of the energy mankind presently uses will be going into the systems on which these LLMs run.
This tech also requires the use of expensive hardware such as AI processors or repurposed GPUs, due to the extraordinary number of calculations that must be performed at extremely high speed.
However, all of this may be about to change, at least a little bit.
One of the most common and numerically burdensome parts of an LLM's processing is matrix multiplication (MatMul).
A paper published by a group of researchers claims to have come up with a way to produce LLMs without the burden of MatMul operations and, if this is so, the cost, energy requirements and speed of LLMs could be about to undergo a significant change.
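For those curious as to how you can run a neural network without multiplying matrices, the usual trick in work of this kind (and I'm sketching from general principles here, not from the paper's actual code) is to constrain the weights to just -1, 0 and +1, at which point every "multiplication" collapses into a simple add or subtract. A toy illustration in Python:

```python
import numpy as np

# Toy sketch only (my reading of the general technique, not the paper's
# code): with weights restricted to {-1, 0, +1}, a "matrix multiply"
# needs no actual multiplications at all.
def ternary_linear(x, w):
    """x: (d_in,) activations; w: (d_out, d_in) with entries in {-1, 0, +1}."""
    out = np.zeros(w.shape[0])
    for i, row in enumerate(w):
        # Each output is just a sum of some inputs minus a sum of others.
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

x = np.random.randn(8)
w = np.random.choice([-1, 0, 1], size=(4, 8))
assert np.allclose(ternary_linear(x, w), w @ x)  # same result, no multiplies
```

On real hardware the savings come from doing those additions in bulk, but even this toy version shows why the expensive multiplier circuits in GPUs become unnecessary.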
According to the paper, training speeds can be hiked by over 25 percent and memory requirements, after further optimisation, could be reduced by a factor of 10.
In systems that are as large as those usually required for commercial-grade LLMs, this can represent a huge saving on hardware costs to deliver the same level of performance as MatMul-based systems.
Also, because the very complex and processor-intensive MatMul processing is avoided, researchers have been able to build a custom FPGA-based accelerator that processes billion-parameter scale models using just 13W of power, thus demonstrating the potential for "brain-like efficiency" in future lightweight LLMs.
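To put that 13W figure in perspective, here's some back-of-envelope arithmetic of my own (the GPU wattage below is my assumption, not something from the paper):

```python
# Back-of-envelope comparison; only the 13W number comes from the
# reporting on the paper, the other figures are my own assumptions.
gpu_watts = 700    # rated power of a high-end datacentre GPU (e.g. NVIDIA H100 SXM)
fpga_watts = 13    # power reported for the researchers' FPGA accelerator
brain_watts = 20   # commonly cited estimate for the human brain

print(f"~{gpu_watts / fpga_watts:.0f}x less power than one high-end GPU")
```

Which is presumably where the "brain-like efficiency" phrase comes from: 13W is less than the roughly 20 watts your own head is burning while you read this.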
What are the implications of all this?
Well, I'm no LLM/AI specialist and I have to concede that most of the contents of this paper soar miles over my head, but it would appear that this breakthrough could lead to faster, cheaper and more energy-efficient AI processing, and that will result in more AI everywhere.
Is that a good thing?
I don't know.
Because I'm now in the second year of my eighth decade on the planet, part of me is averse to the level of change that AI is bringing to our world. Fortunately, however, I am aware of this age-related bias and thus do not discount the value of this new tech.
AI is just a tool (at this stage) so whether it's a good thing or a bad thing will be entirely determined by how we actually use it.
If we recklessly apply it to everything we do then some of the outcomes will be far less than optimal. However, if we're prudent and measured in our use of AI, the benefits could be huge and we may see unprecedented progress in the areas of science and technology during the years ahead.
For those who might want to play around with AI at an experimenter's level, it seems that a new NPU (Neural Processing Unit) has been announced for the good old Raspberry Pi. The Hailo AI Kit promises to bring AI capabilities to the humble RPi for a suitably low price and is something I may pick up so that I can become a little more familiar with our soon-to-be computer-based overlords.
Carpe Diem folks!