I like to use AI tools like Gemini, ChatGPT, etc. for tasks that I know how to do well. Really well.
One of the main reasons for this is that I can look through the output and spot errors quickly. I can correct them with a follow-up prompt, or just take the output and fix the errors manually.
Unfortunately, a lot of the people I talk to and work with use these tools for things they don’t know well (or don’t know at all). A BIG problem with that approach is that you might miss the models’ hallucinations. They don’t hallucinate as much as they used to, but they definitely still do.
The Google DeepMind team recently referred to misinformation that ends up in a model’s context as “context poisoning”. They added that this can often take a very long time to undo, and that the model can become fixated on achieving impossible or irrelevant goals.
This happens a lot with agents - AI systems that chain together multiple model calls and tools to automate multi-step tasks. If something in the middle of that chain is “poisoned”, the entire workflow can suffer. Worse, if you don’t know the topic well enough to realize the output is wrong, that could be REALLY bad.
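To make that concrete, here’s a rough sketch in Python of why one bad step hurts everything downstream. The `call_model` function is a made-up placeholder (I’m not pointing at any specific framework); the point is that each step’s output gets folded into the context the next step sees, so a hallucinated “fact” early on quietly becomes ground truth for the rest of the run.

```python
def call_model(prompt: str) -> str:
    # Placeholder: swap in a real API call (OpenAI, Gemini, etc.).
    return f"<model output for: {prompt.splitlines()[-1]}>"

def run_pipeline(task: str, steps: list[str]) -> str:
    context = f"Task: {task}"
    for step in steps:
        # Each step sees everything produced so far...
        output = call_model(f"{context}\nNext step: {step}")
        # ...and its output becomes part of the context for every
        # later step. One hallucinated "fact" here gets treated as
        # truth for the rest of the run -- that's the poisoning.
        context += f"\n[{step}] {output}"
    return context
```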
So what can you do about this?
Experiment with different prompts until you can write the shortest possible prompt that still provides enough context. And for important tasks, cross-reference the output against other sources (or another model) to make sure nothing was poisoned.
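A cheap way to do that cross-referencing is to ask two different models the same question and flag anything they disagree on for a human to check. Here’s a rough Python sketch; `ask_model_a` and `ask_model_b` are placeholders for whichever two providers you actually use, and the string comparison is deliberately dumb - in practice you’d compare extracted facts or use a third model as a judge.

```python
def ask_model_a(prompt: str) -> str:
    # Placeholder: e.g. a call to your first provider.
    return "..."

def ask_model_b(prompt: str) -> str:
    # Placeholder: e.g. a call to a second, different provider.
    return "..."

def cross_check(prompt: str) -> dict:
    answer_a = ask_model_a(prompt)
    answer_b = ask_model_b(prompt)
    # Naive agreement check: normalize and compare the two answers.
    agree = answer_a.strip().lower() == answer_b.strip().lower()
    return {
        "answer": answer_a,
        "second_opinion": answer_b,
        "needs_human_review": not agree,  # disagreement = possible poisoning
    }
```

If the two answers disagree, that doesn’t tell you which one is wrong - it just tells you where to spend your own attention.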
If the chance of your output being poisoned is unacceptable, don’t use an AI tool.
(Cause humans never make mistakes…)