It makes sense on paper: if you want to get better at using a technology, focus on improving your technical skills.
In practice, though, if you want to get better results with AI tools like ChatGPT, you might have to work on skills that have nothing to do with technology.
Here are some examples of issues that I commonly see:
- People don’t have clarity on what they want to do
- People don’t know the difference between a task and a project (a collection of tasks)
- People don’t know how to break a project down into actionable steps
- People don’t know how to communicate what they want to do
- People don’t know how to evaluate the output that they get from an LLM
- Etc.
This list could get long and out of hand quickly, but I think this is enough to illustrate the point. None of these issues have anything to do with technology, and all of them can contribute to getting results you don’t like from AI tools (and to blaming the tools).
If you don’t get the output that you want, the first thing you should ask is: “How could this be my fault?”
If you ask that question and draw a blank, tell the chatbot that the output isn’t what you wanted and ask it what you could change about your approach to get closer to the answer(s) that you want. Something as simple as “This isn’t what I was looking for. What should I have told you up front to get a better result?” will often point you in the right direction.
Build up your skills and this gets easier.
