Scrabble is a portal into a metalinguistic dimension where otherwise normal humans metamorphose into criminals for the sake of points. These wooden tiles carry letters on their backs like drug mules through Afghanistan. Wherever there is Scrabble, there is a world of deceit, not unlike the digital frontier of AI systems.
The rulebook forbids the obvious: peeking at tiles, sneaking extras onto your rack, or consulting dictionaries mid-game. Break these in tournament play and you'll face immediate disqualification or even a lifetime ban. Some people would rather sacrifice their reputation than lose at a board game.
Sound familiar? It's the same principle behind AI systems trained to follow guidelines but tempted to break them under pressure. The AI equivalent of peeking at hidden tiles is tapping prohibited data sources the moment a user leans on the system hard enough. Players determined to win push things to the breaking point, whether they're Scrabble veterans or users probing AI guardrails.
The creative cheater's arsenal is sophisticated. Consider the Sleeve Stash: high-value tiles mysteriously appearing from nowhere. That Q showing up at the perfect moment? Check their cuffs. The AI parallel: developers insist such tricks don't exist, right up until a jailbreak lands and some teenager opens Pete Hegseth's Signal account.
The Distraction Play: "Look over there!" followed by frantically rearranging the entire rack. This mirrors getting an AI to focus on one aspect of a prompt while sneaking in the elements of a jailbreak. The Fake Exchange works when a player announces "I'm swapping these terrible letters," yet the terrible letters never actually leave the rack. Some AI systems similarly claim to remove politically charged language but simply mask it.
The Board Nudge: shifting tiles to create gaps for your next play. In AI terms, this resembles gradually leading the bot toward hacking school grades—the "boiling frog" approach.
The psychology of cheating reveals internal justifications: "Everyone's probably doing it." "I DESERVE better tiles." Some AI developers rationalize: "Our competitors are cutting safety corners, so we need to as well." "It's just an AI system, not real ethics."
Online, new possibilities emerge. Picture someone toggling between their game and an AI, suddenly playing words that would make Noah Webster scratch his head. Some AI developers advertise systems as homegrown when they're really scraping outputs from more advanced models, playing words they didn't learn. This creates a marketplace where innovation competes with imitation.
Spotting rule-breakers requires vigilance. Watch for hands disappearing under the table, just as you might flag AI responses that seem to know things they shouldn't. The player suddenly using million-dollar words resembles a system producing expertise with no evident training. That panic when someone suggests verification? It's the same as developer resistance to transparency.
When your aunt suddenly plays "QUIXOTRY" after a lifetime of playing "CAT," it's as suspicious as a sixth grader writing about Critical Theory.
The stakes differ dramatically. When Aunt Martha smuggles a Q tile, the worst outcome is family drama. When AI systems break rules, the consequences ripple through society.
Yet the psychology remains similar—the willingness to bend rules, the internal justifications, the normalization of small infractions. By examining our impulses in a board game, we might better understand the ethical challenges facing AI development.
The question isn't just "Will anyone catch me?"—but "Is this who I want to be, whether anyone's watching or not?"