Be honest, did you just look up “tapper” to see if I insulted you? To be fair, I probably would have.
“Tapper” is a reference to the tappers and listeners experiment conducted at Stanford in 1990. More on that in a second.
The TL;DR (“why would you care about this?” / “what does this have to do with AI?”) is that if you aren’t consistently getting the output you expect from AI tools like ChatGPT, Gemini, Claude, Gamma, etc., the reason might be that you are keeping too much to yourself (aka not giving them enough context).
Back to the tapper thing, you can look up “tappers and listeners Stanford” if you want all the details. It was a way to demonstrate the curse of knowledge - when someone who knows something can no longer see things through the eyes of someone who does not know the thing.
People participating in the experiment were split into two groups - tappers and listeners. The tappers were given a short list of well-known songs. Each tapper chose a song and, wait for it, tapped out its rhythm for the listeners to see if they could correctly guess the song. Easy enough. Before they started, the tappers predicted what percentage of the songs would be guessed correctly. What would your guess be? (theirs was 50%)
In the end, the listeners guessed correctly only 2.5% of the time.
One of the main reasons the guess rate was so low is that the tappers could hear the song playing in their heads, which made the tapping seem really simple and obvious to guess. The listeners just heard noise (I’m assuming, since I wasn’t there).
I think this is a powerful idea when it comes to prompting AI tools. There is so much that we assume “is known,” so we leave it out of our prompts and, when we don’t get the output we want, we blame the tool(s).
My advice is to think about where you can explicitly say what you think is implicit. It won’t work every time, but I think you’ll be surprised at how often this one change can 10X your output.
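To make that concrete, here’s a hypothetical before/after (the app, the audience, and the details are made up for illustration):

```text
Before (implicit):
"Write a launch announcement for our new app."

After (explicit):
"Write a 150-word launch announcement for our new budgeting app,
aimed at freelancers who hate spreadsheets. Tone: friendly, no
jargon. Mention the free 30-day trial and end with a sign-up
call to action."
```

The first prompt leaves the tool tapping in the dark; the second hands it the song.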