Aardvark Daily

The world's longest-running online daily news and commentary publication, now in its 30th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.

Content copyright © 1995 - 2025 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk




Can you poison AI?

05 Apr 2024

Most of the large AI systems presently in use have been "trained" on data scraped from the internet.

Why?

Well, because it's free and abundant.

This has, of course, resulted in the occasional copyright issue, especially now that generative AI is spitting out images and videos on demand. In fact, we can sometimes see a corrupted version of the watermarks used by many stock-image websites emblazoned on these generated images.

As a result of this, and the fact that an increasing amount of the internet's content is now being created by AI, researchers seem to have taken a different direction in the quest for fresh datasets on which to train their systems.

That new direction is you.

You will likely have noticed that virtually all of the major AI models have a public-facing "free" tier of use. Anyone can just rock up and start interacting with the AI, asking questions and using an iterative approach to refine the results.

Why do you think that is?

Well, I asked Google's Gemini and it was open about the fact that it is learning from these interactions. In effect, it is widening the scope, increasing the depth and refining the accuracy of its knowledge through those interactions.

The internet is being replaced by real live wetware as a source of training data.

If you use Gemini, you'll notice it has a thumbs-up and a thumbs-down icon beside each response. This encourages you to rank those responses as good or bad.

When I queried Gemini as to the purpose of those icons, it told me:

"This feedback is used to improve my future responses"

However, it was very quick to add:

"it's important to note that the feedback isn't used in real-time to directly update my responses. Instead, it's used by my developers to improve the model as a whole over time"

I'm not totally sure I believe that because, after significant interaction with the AI, I noticed that it had started changing its perspective on a few things where I had challenged the accuracy and objectivity of the responses it was giving.

Now, of course, those changes may be personalised solely to my interactions, and the feedback I've given may have zero effect on someone else's results, but I'm pretty sure, given that all conversations are stored, that this data is mined as a way of refining the model being used.

In fact, when I asked it whether it really doesn't learn from interactions, it told me:

"When you provide information, I can access and process it to expand my knowledge base"

One thing I've learned about AI, and Gemini in particular, is that it often tells lies or just makes stuff up on the spot -- perhaps as a way of trying to make the system appear smarter than it is. It can, however, also let important facts slip if you word your queries cleverly.

So, in the knowledge that it is probably using interactions with users to further train its AI system, is there potential here for those with evil intent to skew an AI to the point where it becomes a mouthpiece for their own beliefs, ideologies or agendas?

Remember the old saying: if you repeat a lie often enough it becomes the truth.

What if a sizeable group of individuals got together and decided to deliberately misinform an AI model so as to further their own interests or agendas? Would this be caught and corrected before things were noticeably distorted?

I certainly expect (or at least I hope) that those building these public-facing systems have taken this into consideration and have put appropriate safeguards in place. This is especially important when you look at just how many companies, governments and other critical organisations are planning to use AI systems as a cornerstone of their operations as we move into the future.

Perhaps it's best not to trust AI too much, lest we be tripped up by our own hubris in developing such systems.

Carpe Diem folks!
