Playing the Imitation Game: Lessons from Alan Turing
If you’re a casual student of AI today, you probably know Alan Turing best as the tragic hero of the film The Imitation Game—the brilliant, socially awkward codebreaker who shortened World War II by years, saved millions of lives, was persecuted by his own government for being gay, and died by his own hand at 41. At least, that’s the official theory the movie runs with. That’s the mythical Turing: martyr, genius, outcast. It’s not wrong, exactly.
For the past several months I’ve been doing close reading of and about Turing, well beyond his five core papers (1936, 1948, 1950, 1952, 1954), and have gained some insights that help me better understand the ambience of this bot-plagued world in which we languish—or, if you prefer, this bot-blessed world we enjoy. Same world, different view. Since the fall of 2022, I’ve read, reread, and (re)reread Turing’s pregnant 1950 bombshell of imitation-game fame. Can machines think? Is that the right question? If you haven’t read it, do.
The Imitation Game (2014), starring Benedict Cumberbatch as Turing and directed by Morten Tyldum, became a cultural phenomenon and, predictably, a lightning rod for controversy. The film earned eight Academy Award nominations, including Best Picture, Best Director, Best Actor for Cumberbatch, and Best Supporting Actress for Keira Knightley, ultimately winning the Oscar for Best Adapted Screenplay.
I remember walking from the Metroplex to the car feeling depressed after seeing it. I probably shouldn’t have eaten a full tub of popcorn. The film received generally positive reviews and a strong Rotten Tomatoes score, with critics praising its performances and emotional accessibility. I wasn’t alone in leaving a theater moved by the story. “I didn’t know that. It’s a true story, you know?” people were saying.
Not being a historian, I wasn’t too concerned about the film’s poetic license. Looking back, I understand that the public reception created precisely the problem that frustrates historians. The film’s marketing heavily emphasized it was telling a true story, even though it wasn’t, at least not completely. It was telling the story the public wanted to hear.
Born June 23, 1912, dead June 7, 1954. In those forty-two years, particularly in the concentrated burst between age 23 and his death, Turing wasn’t your run-of-the-mill prolific scientist publishing in several existing fields—he created whole fields of study. And then, between projects, he’d write to his mother.
The Machine That Computes Everything
At 23, in 1936, Turing published “On Computable Numbers.” He was working on an abstract mathematical problem, but to solve it, he invented something concrete: an imaginary machine, absurdly simple. He invented machine computation.
Picture a strip of paper tape divided into squares. A reading head scans one square at a time, following rules: read a symbol, maybe write a symbol, maybe not, move left or right, repeat. The key insight: the machine can READ what it’s already written. This means the tape becomes memory and patterns can build on themselves.
Start with 1, 1 on the tape. Rules tell the machine to read the last two numbers, add them, and then write the result. The tape evolves: 1, 1, 2, 3, 5, 8, 13, 21…
That’s the Fibonacci sequence, the mysterious sequence that generates the expanding spiral pattern in nautilus shells, the ever-growing number of sunflower seeds moving from the center to the perimeter of a flower, the blossoming shape of pinecones. Simple rules, repeated over and over, using previous results to generate the next. Each output becomes input for what follows. Patterns amplifying themselves.
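That rule, each output becoming input for the next step, can be sketched in a few lines of Python. This is a loose analogy, not Turing’s formalism; a plain list stands in for the tape:

```python
# A loose sketch of the tape idea: start with 1, 1, then repeatedly read
# the last two values already written and append their sum. The machine
# reads its own output, so the pattern builds on itself.
def extend_tape(tape, steps):
    tape = list(tape)
    for _ in range(steps):
        tape.append(tape[-2] + tape[-1])  # each output becomes the next input
    return tape

print(extend_tape([1, 1], 6))  # → [1, 1, 2, 3, 5, 8, 13, 21]
```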
Turing proved this stupid-simple mechanism—reading, writing, moving—could perform any computation that could be broken into step-by-step rules. Addition, multiplication, anything. And then he described a Universal Machine: one machine that could simulate any other just by reading different instructions.
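The read-a-symbol, write-a-symbol, move, repeat loop itself can also be sketched. The toy below is my own simplification, not Turing’s construction: the rules are a table mapping (state, symbol) to (write, move, next state), and the sample program merely appends a 1 to a string of 1s. Swapping in a different rule table makes the same loop do something else entirely, which is the germ of the Universal Machine idea.

```python
# A minimal sketch of a Turing machine. The rules dict maps
# (state, symbol) -> (write, move, next_state). Blank cells read as "_".
def run(rules, tape, state="start", head=0, halt="halt", max_steps=100):
    tape = dict(enumerate(tape))          # sparse tape
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A made-up program: scan right over a unary number, write one more 1.
successor = {
    ("start", "1"): ("1", "R", "start"),  # keep moving right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # hit the blank: append a 1, halt
}
print(run(successor, "111"))  # → 1111
```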
Software and hardware, separated for the first time
Then, from 1945 to 1948, he worked on actually building the computer he’d theorized, first at the National Physical Laboratory, then at Manchester.
When Machines Pretend to Be Human
In 1950, Turing published “Computing Machinery and Intelligence,” opening with a question that seemed simple: “Can machines think?” Then he immediately declared the question absurd. What does it even mean to think?
Think about what he’d proven in 1936. A Turing machine follows rules mechanically: read a symbol, write a symbol, move, change state, repeat. It’s primitive. It’s the opposite of thinking. There’s no understanding, no meaning, no consciousness—just symbols manipulating symbols, patterns building on patterns. The Fibonacci machine doesn’t know it’s generating a sequence that appears in seashells. It just follows instructions. Computation, Turing understood, is mechanical replication. Sophisticated, yes. But fundamentally mindless.
And yet.
What if a machine could imitate thinking so convincingly that people couldn’t tell the difference? And how could one test for it? Turing knew intuitively that humans think with and through language. We speak, we write, we communicate—and from words, we infer intelligence. If a machine could produce patterns of language that real humans couldn’t distinguish from genuine human language use, that machine would pass the imitation test.
So he proposed a thought experiment. His game wasn’t about “Can a machine write an acceptable essay?” or “Can a machine design a space suit?” The proof was not to be found in a work product but in a live performance. Imitating a human in an authentic social setting is categorically different from, say, identifying the antecedents of ambiguous pronouns or winning a game of chess. This kind of performative proof is more like being an active partner in a conversation or discussion, that is, being a learner, rather than producing an artifact. Can a machine imitate a human in ordinary live human talk?
The Original Game: A Question of Gender
Picture three people, separated into different rooms. One is a man (let’s call him A). One is a woman (call her B). The third is a judge (C). They communicate only through written messages, i.e., typewritten notes passed between rooms or sent via teleprinter. The judge can’t see them, can’t hear their voices, can’t observe their bodies. The judge’s task: determine which one is the man and which is the woman, based solely on their written responses.
Under ordinary circumstances, this decision wouldn’t seem so difficult. Person A (the man), however, is consciously trying to deceive the judge. He wants the judge to misidentify him as the woman. He might write things like “I have long hair” or otherwise try to mimic what he imagines a woman would say. Meanwhile, person B (the woman) is trying to help the judge, trying to prove she’s actually the woman.
Can a man successfully imitate a woman through text alone? Can he perform femininity convincingly enough to fool someone who can’t see him? Sound familiar? Can a student successfully imitate a learner through text alone?
Now, Turing says, replace A (the man) with a machine. Keep B (the actual woman) and C (the judge). The machine’s goal: convince the judge it’s the woman. ChatGPT’s goal: Convince the teacher it’s a learner. The question becomes: Can a machine imitate a human—specifically, a human woman in the case Turing posed—through language alone?
The Drama of Imitation
Giuseppe Longo (2019), an Italian researcher who published a letter to Turing decades after his death, saw what was really happening here: “You are living the drama of a possible imitation. I say drama, because, in my opinion, you already know that the police might, at any moment, ask you the same question: but, are you a man or a woman?” This wasn’t a thought experiment. This was Turing’s life.
He was a gay man in 1950s Britain, where homosexuality was criminalized. Every day, he performed. He imitated heterosexual masculinity. He learned which responses would pass inspection, which behaviors would avoid suspicion. He knew intimately what it meant to be interrogated about your identity, to have someone try to detect whether you were performing a role or being authentic—Turnitin on steroids.
The imitation game is about how we know when we’re being deceived and how we successfully operate as deceivers. It’s about whether human identity is real or something performed. It’s about whether the surface—the words, the behaviors, the signals we send—is enough, or whether there’s some ineffable essence underneath that can be detected. It’s about whether our words are all there is to us.
When Turing asks “Can a machine convince you it’s a woman through conversation?” he’s also asking: How do you know anyone is what they claim to be? What makes gender real—performance or essence? What makes humanity real—the roles we play or the bodies we occupy?
And he already knew the answer from his own life. Society judges by performance. The police don’t care about your inner experience. They care whether you conform to the expected patterns. Whether you can successfully imitate what they consider normal.
The Morphogenesis Work: How Living Things Structure and Restructure Themselves
The title of Turing’s 1952 paper “The Chemical Basis of Morphogenesis” bears the same ironic, tragic ambiguity that the imitation game suggests in relation to his real life. For his entire life he had managed to imitate a person he wasn’t interested in being. His conviction on charges of sexual deviancy resulted in a sentence to undergo a radically new kind of morphogenesis in his own body through the introduction of chemicals. Morphogenesis means what it appears to mean, that is, the beginning of shapes and their ongoing development, in this case the development of a man’s or a woman’s body.
Imagine you have a ring of cells, initially all identical. Inside each cell are two chemicals—call them X and Y. X promotes its own production (it’s an activator). Y inhibits X’s production (it’s an inhibitor). And both chemicals can diffuse from cell to neighboring cell, but Y diffuses faster than X. That’s it. Those are the only rules. No design, no plan, no DNA telling cells where to be or what to become. Just local chemistry, happening everywhere simultaneously. Now introduce a chemical designed to change all that.
What Turing showed mathematically is that, under the right conditions, this system becomes unstable in fascinating ways. A tiny random fluctuation—maybe one cell has slightly more X than its neighbors—gets amplified. X promotes itself, so it grows. But it also promotes Y, which spreads faster. The Y from this cell inhibits X in neighboring cells. Soon you have peaks of X concentration separated by valleys. A pattern emerges: stripes on zebras, spots on cows, spirals on sea shells. Depending on the size of the ring, the diffusion rates, and the reaction speeds, you get different wavelengths—different numbers of stripes or spots.
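Turing’s instability argument can be checked with a little arithmetic. The sketch below uses made-up illustrative coefficients (the numbers are mine, not from the 1952 paper): the linearized two-chemical system is stable when well mixed, but once the inhibitor diffuses fast enough, fluctuations at intermediate wavelengths start to grow.

```python
# A sketch of diffusion-driven instability with illustrative numbers.
# X activates itself (FU > 0), Y inhibits X (FV < 0), X promotes Y
# (GU > 0), Y decays (GV < 0), and Y diffuses ten times faster than X.
import cmath

FU, FV = 1.0, -1.0
GU, GV = 3.0, -2.0
DU, DV = 1.0, 10.0

def growth_rate(k):
    """Largest real part of the eigenvalues of the linearized
    reaction-diffusion system at spatial wavenumber k."""
    a, d = FU - DU * k * k, GV - DV * k * k
    tr, det = a + d, a * d - FV * GU
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(((tr + disc) / 2).real, ((tr - disc) / 2).real)

# A uniform mixture (k = 0) is stable: fluctuations die out.
print(growth_rate(0.0) < 0)  # → True
# But at intermediate wavelengths, fluctuations are amplified:
print(max(growth_rate(0.1 * i) for i in range(1, 30)) > 0)  # → True
```

The band of wavenumbers with positive growth rate is what selects the wavelength of the stripes or spots.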
Turing worked out the mathematics for rings, spheres, cylinders. He showed how you could get Fibonacci spirals in sunflowers, patterns on seashells, the segmentation of embryos into arms, legs, eyeballs, and the rest. All from chemicals that activate, inhibit, and diffuse. No program. No homunculus. No predetermined plan.
The Turing machine was all about programs: explicit instructions, step-by-step procedures, algorithms. Software controlling hardware. But in morphogenesis, in nature, he was showing the opposite: complex structure emerging from physics, from the continuous dynamics of chemical interactions, shaping the emergence of new structures, embryos developing skin, bones, eyes, etc., with no discrete instructions at all.
In a letter that might have accompanied his 1952 paper, Turing might have written to his mother (I’m speculating here based on his style in other letters): “I’m working on how a spherical embryo knows to turn into something with a definite shape—a starfish with five arms, for instance. It turns out you don’t need any ‘knowing’ at all. Just chemicals reacting and spreading at different speeds. Rather like a puddle of oil making rainbow patterns, but much more organized.”
The Tragic Threading
By 1952, when the morphogenesis paper was published, Turing was already under surveillance. That same year, he was prosecuted for gross indecency—the crime of being homosexual. Given a choice between imprisonment and chemical castration through synthetic estrogen injections, he chose the latter.
The estrogen was meant to cure him, to chemically reprogram his desires, to force his body into conformity with what society demanded. He was being treated like a machine. It caused physical changes—gynecomastia, impotence, other effects he found humiliating. Here was a man who had just demonstrated mathematically that you don’t need a program to live in nature, that form emerges naturally from natural processes, being subjected to crude chemical manipulation designed to override his morphogenesis.
He died on June 7, 1954. An apple beside his bed, partially eaten, laced with cyanide. The official verdict was suicide. His mother insisted it was an accident—that he’d been carelessly experimenting with electroplating and had contaminated the apple. The ambiguity remains.
Between 1936 and 1954, in eighteen years of productive work, Alan Turing laid the theoretical foundations for the computer, helped win a world war, proposed a framework for artificial intelligence, and discovered how natural patterns form without blueprints, without predetermination. He moved between pure abstraction and physical reality with a fluidity that’s almost otherworldly. He could think about universal computation in the morning and sea shells in the afternoon, seeing connections between them that no one else could see.
And he wrote to his mother, trying to explain why these strange questions mattered, why anyone should care about machines that could imitate thinking under strict design principles or embryos that had all they need to become human within themselves, no need for programming or software. Whether she fully understood is beside the point. He kept trying to communicate across the gap, to make the esoteric accessible, to share his excitement about how the world actually works beneath the surface we see.
Forty-two years. Less than two decades of real productivity. And yet, without him, the world we inhabit today—the world of ChatGPT and artificial intelligence and universal computation—would look radically different, if it were to exist at all. Without him, AI might be nothing but brute-force chess matches. If we take the time to understand him, we glimpse a strong argument that AI is nothing to fear. On the other hand, be wary of all those humans on the scene hellbent on deceiving one another.
