|
Aardvark DailyThe world's longest-running online daily news and commentary publication, now in its 30th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.Content copyright © 1995 - 2025 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk |
Please visit the sponsor! |
There is big money in learning how to use AI and then harnessing that power to boost productivity.
Many people have now integrated AI systems such as ChatGPT, Gemini, Grok or whatever into their daily workflows.
Research that once took days can be done in seconds, reports that used to be onerous and time-consuming are now offloaded to an LLM that will spit out a passable tome in the blink of an eye.
Given the amount of thinking and typing we have to do in an average day, clearly AI is a boon to all.
However, if you really want to make a lot of money, AI may deliver even more opportunities than some would like to admit.
That is because AI is not only good at doing the right thing, it also seems to be very good at doing the wrong thing.
We're already seeing reports that AI has been used to create malware and other tools which can then extract valuable data from unsuspecting victims. Once that data is in the hands of evil little sods (ELS) it becomes trivial to extort money from those victims by threatening to publish sensitive material on the dark web or elsewhere.
AI is the slave of anyone who would use it.
Naturally, those who build and administer AI systems work hard to prevent the "mis"use of their systems so a growing array of guardrails and safeguards are being installed. Unfortunately these are often created reactively rather than proactively -- only after that misuse has occured.
My recent ability to dupe Google Gemini into believing that I had mastered time travel (here is the log of that conversation for those interested in how I did that) shows how vulnerable these LLMs are to misdirection.
It is vulnerabilities like this that allow ELS to coerce LLMs into performing actions that sidestep the inbuilt safeguards.
There was an interesting story on ArsTechnica today which further highlights just how much risk is involved when you give vulnerable AI agents access and trust that everything will go to plan.
I'm pretty sure that the most effective security hacks in the future won't come from ELS who create lines of obfuscated malware that takes over a computer's CPU and transmits valuable data back to then. Instead, it's almost a sure thing that tomorrow's ELS will be highly skilled in prompt hacking and manipulating the AI agents that all companies will become increasingly reliant on.
The great thing about secrely burying commands in otherwise innocuous documents then leaving them on the web for AI agents to discover is that there's not a whole lot of work or risk involved for those who do so. Those boobytrapped documents can hang around forever, until one day they're processed by an LLM in response to a unwitting user's request and those cleverly contrived trojan horse commands instruct the AI to do things it was never supposed to do.
At this stage, and probably for quite some time, any data to which an AI agent has access should be considered insecure. You might as well have it on a public-facing webserver because you can not trust AI to keep it safe.
Sadly, I fear that the rush to "cost savings" and "improved productivity" will result in far too many organisations overlooking the vulnerabilities they're creating and bad things will happen -- possibly to YOUR data.
Let's keep an eye on things. If/when I spot the obvious I'll write yet another "IToldYaSo" column about it.
Carpe Diem folks!
Please visit the sponsor! |
Here is a PERMANENT link to this column
Beware The Alternative Energy Scammers
The Great "Run Your Car On Water" Scam