Bot Whispering
This post is really cool! Besides teaching us how AI is going to revolutionize shopping, Jayshree teaches us about using the bot—and about bot whispering.
Jayshree shows herself to be a master bot whisperer by reflecting on her response when output from the bot “didn’t land well.” I’ve seen people shrug their shoulders and say something like “See? It didn’t answer my question. It never does. I thought this thing was supposed to be intelligent.” Or “The damn thing is hallucinating again. Can’t believe a word it says.”
Instead, Jayshree refrains from finding fault with the tool and considers the possibility that, in the role of tool user, she may have caused the poor result herself. Have you ever used a tool inappropriately? Did you return it and ask for your money back? I have. I returned a Hohner harmonica because I thought it was defective. After I learned to blow a little harp, I realized I was the defective one.
The “didn’t land well” comment is itself proof of several competencies in the toolkit of high-performing bot whisperers. Reading and writing from the perspective of tools is one. This competence isn’t limited to ChatGPT, but transferring the skill to AI is not obvious. If you imagine (read) yourself as a hammer, you understand that hammering on a very thin piece of wood is going to produce a hole or a smush in the wood: that’s reading like a hammer. If you reconsider and decide to use a screwdriver or a staple gun or even Elmer’s glue as an alternative to complete the act, you are writing in the real world.
If you had limitations on your consciousness like those inherent in the algorithms that generate bot output, you’d read very literally, especially at first, before you accumulate context from the rest of the conversation. You’d read “show me pantsuits up to $200” as just that. You would not know that the user really meant “show me pantsuits that are inexpensive but elegant and probably sell for close to $200.”
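The gap between the literal reading and the intended one can be made concrete in a few lines of Python. This is a toy sketch with made-up catalog data; the item names, prices, and the $60 “near the ceiling” band are all my own assumptions, not anything from Jayshree’s post:

```python
# Toy catalog (hypothetical data) illustrating how literally a bot can
# read "show me pantsuits up to $200": the phrase becomes a hard price
# ceiling, not the shopper's real intent of "elegant, close to $200."
pantsuits = [
    {"name": "bargain suit", "price": 39.99},
    {"name": "tailored suit", "price": 189.00},
    {"name": "designer suit", "price": 249.00},
]

# Literal reading: anything at or under $200 qualifies, including
# the $39.99 suit the shopper would never consider.
literal = [p["name"] for p in pantsuits if p["price"] <= 200]

# Closer to the intended reading: near the ceiling, here within $60 of it.
intended = [p["name"] for p in pantsuits if 140 <= p["price"] <= 200]

print(literal)   # ['bargain suit', 'tailored suit']
print(intended)  # ['tailored suit']
```

Both filters are “correct” for some command; only one matches what the shopper wanted, which is exactly the mismatch the next paragraph names.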
Anyone who uses the bot, including those gold medal whisperers on Mount Olympus, will make this mistake occasionally when distracted or tired. Ordinary humans who don’t habitually scrutinize language, those not cursed with touches of the grammarian, can be forgiven. Technically, this mistake occurs when the locutionary force of a performative utterance (a command assertion in which words say and mean something phonetically, syntactically, and semantically inside the four corners of the text) doesn’t match the illocutionary force of the utterance (what the speaker meant the words to convey, the response the speaker wanted). The perlocutionary force refers to the response actually produced in the other interlocutor.
“I bow deeply before you. So far you may not know whether I am paying obeisance, responding to indigestion, or looking for a wayward contact lens.”1
A mismatch between locutionary force and illocutionary force isn’t always obvious; most mismatches are off by small amounts, not totally off the wall. I don’t say “Let’s have sushi” when I intend to say “Let’s grab some pizza.” The fact that even tiny mismatches can cause bot derailments, waste time, and generate so-called hallucinations means that astute tool users are highly sensitive to crafting input language that matches the intention of the command. When output doesn’t “land well,” the culprit often hides in the language of the command, as Jayshree models.
The other point she models, beyond occupying a command orientation as a user and bringing linguistic sensitivity to mismatches between locutionary and illocutionary forces (a cognitively demanding behavior calling on well-developed metalinguistic skills), is differentiating between general bots (g bots) and special bots (sp bots). General bots are trained on a broad sampling of widely available digital information. Special bots are trained on carefully defined, limited domains of information. Jayshree comments multiple times that the benefits she derived from a g bot regarding availability and inventory of attire under a price cutoff (not “cheap” but not “expensive,” yet with a patina of affluence) would be dramatically expanded if the bot had been trained on company catalogs.
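One way to picture the g bot versus sp bot distinction is as a difference in the corpus an answer is drawn from. Here is a toy sketch in which a plain dictionary lookup stands in for real training or retrieval; the product data, SKU number, and store name are invented for illustration:

```python
# Toy illustration (all data hypothetical): the same question put to a
# bot grounded in a broad web sample versus one grounded in a company
# catalog. A dictionary lookup stands in for real training or retrieval.
broad_web = {
    "pantsuits": "General styling advice gathered from many public sites.",
}
company_catalog = {
    "pantsuits": "SKU 1142: navy pantsuit, $189, 12 in stock at the Oak St. store.",
}

def answer(query: str, corpus: dict) -> str:
    """Return whatever the bot's corpus holds for the query."""
    return corpus.get(query, "No information available.")

general = answer("pantsuits", broad_web)        # broad but generic
special = answer("pantsuits", company_catalog)  # narrow but inventory-grade
print(general)
print(special)
```

The general corpus can say something about almost anything; only the catalog-grounded one can report price and stock, which is the expansion Jayshree imagines.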
These insights into using bots as tools, as a commander, not as a friend or a neighbor, not as a buddy or a pal, apply across the board: from those shopping for shoes or vitamins or skin products or background on the topic of butterflies, to those on the boards of the companies who manufacture shoes or skin products, to the lawyers who defend those companies from lawsuits and IRS audits, to the first grade teachers who prepare future shoe shoppers, board members, lawyers, doctors, entomologists, and teachers.
See the bot as a digital tool and practice using it. Subscribe to Jayshree’s Substack and to other writers who share their understandings. As educators, you, your colleagues, and your students can benefit from your modeling, just as we can all benefit from sharing our insights and experiences as Jayshree is doing.
1. Green, Mitchell, “Speech Acts”, The Stanford Encyclopedia of Philosophy (Fall 2021 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/fall2021/entries/speech-acts/.