Happy New Year! Lessons from Last Year’s Surfing on Syntax in the 2023 Artificial Episteme (aka Bot Whispering)
2023 started with me fighting myself. Half of me wanted to try out the bot. Half of me said no. My fear was loss of privacy and security risks. It’s scary to think about how many apps I have installed on my devices, given how little I understand about their inner workings. Plus I didn’t believe the bot could do what I was reading about, the hype, you know. Maybe I’m just numb now, but I don’t worry anymore. I don’t talk to the bot as a confidant; I’m aware that anything I say to the bot adds to its training—it’s learning how to deal with people—and could show up anywhere on anybody’s screen. OpenAI describes it as a child learning to navigate adult thought and language.
I never, ever say anything to the bot I wouldn’t say in public. I saw no reason to be afraid of the bot giving away the data from my knee MRI, an executive decision not lightly made. I judged that the potential good of sharing my experience with the bot outweighed any risk to me. A good rule, I’d say: say nothing you wouldn’t say in public. Think before you speak. Not to beat a dead horse, but giving the bot a credit card number would be verboten.
I bracketed my fears late one night, lying in bed, unable to sleep for thinking, and woke my iPad.
*
During these first interactions with the bot, I tried the free versions of ChatGPT and Google Bard, giving them the performance test I’d had in my head for a long time. Did this thing have human consciousness? For years I’d thought of consciousness as the province of humanity. As Vonnegut’s player piano metaphor rattled around in my head since high school, mechanical music played by ghosts; as Deep Blue blew away chess masters but that’s all, folks, AI couldn’t dial a telephone; as AI simulated a human connection between me and Netflix, queueing up bespoke movies—player pianos… I never once considered these devices conscious. I still believe a porpoise is closer to conscious than a bot.
On the off chance that AI had done the impossible, I needed to find out if the bot could write a poem. Poetry is the distillation of consciousness for me, just short of actually being alive. For years I thought that if a computer could write a poem, I would be convinced a machine had consciousness. I’ve never come across a more humanizing experience than reading or writing a poem. I’ve long selfishly aspired to become a better writer than reader of poetry.
*
So I asked ChatGPT to make me a poem—but not just any poem. First, I asked it to tell me about William Carlos Williams to verify it could at least think at a basic level. It passed with flying colors. I dearly love Williams’s work. I’d be hard pressed to say whether I like his prose or his poetry better, they are so very different yet equally magical. White Mule is a beautifully written novel, a slice of American life in the 1890s, a novel for the ages. Just thinking about it makes me want to read it again.
But the red wheelbarrow—a breathtaking image in the rain. I asked the bot to write a poem in the spirit of probably the best known poem in American history, a poem with an image something like this fabrication…

Note: I had a hand in creating this image, you see, from somewhere, a year after the test.
*
It broke my heart to see the commodification and objectification and forgery of a piece of art so dear to me. So much depends upon a cup of coffee sitting on a kitchen table? Come on, man. This is not human. My first response was horror; then I had to suppress a hysterical giggle so as not to wake my wife. I am prone to giggling.
The bot transformed the red wheelbarrow in the rain into a ceramic coffee cup on a kitchen table. There were no chickens, nothing near the wheelbarrow to give it an authentic place in Gaia. I published a post, I think, including this replica of a masterpiece. I know I wrote a post about it. I have close to 200 unfinished posts this year that I’ll never finish. It could be among the doomed.

*****
Fearlessly, I jumped off the cliff and started writing posts. It’s almost embarrassing when I read them now, my struggles to find some ground to stand on. There are lots of tidbit insights, minnows in a choppy linguistic sea, I could tell you about. I know now how annoying apologies can be, and I’m happy that the bot stopped this childish, manipulative behavior.
Looking back, what bothered me most was its constant self put-downs. Rather than waste output on canned apologies—the worst part of calling Amazon customer service to complain—“I’m just a lowly AI” could have been “here’s what you said that screwed me up.” For the longest time I routinely asked the bot to score me on my performance during a session with qualitative feedback. It was incredibly helpful.
Bruno Latour opened my eyes to the idea of reality seen from nowhere, the scientist’s view. In one sense, I understand that a scientist studies reality as machine-like as humanly possible, to bracket error by way of method and uncover timeless essence at the core of things. It’s work to free ourselves from our isolated views from somewhere and to view from nowhere. But trees do not stand still when seen from nowhere as they do when seen from somewhere. Bacteria living nowhere break down organic material somewhere. Children never live nowhere. Like all things from somewhere, nothing is ever really nowhere.
The aspiration of the scientific view to become ever more technical in terms of extending our sensual apparatus, a matter of method to be carried out nowhere, is our best but likely impossible hope for sustaining life as we know it in a climate-ravaged cosmos. Science cannot speak about everything everywhere from knowledge gained from nowhere. Bots have even less ground to speak from. They are not even looking at nowhere. They look, rather, at what has been written about it.
Bots have no aspirations. Language bots see only the sliver of the universe humans have written about, perhaps augmented by what they have talked about. Bots and humans communicate best when the human sustains a view from somewhere and uses the bot as a flashlight into the linguistic caves of nowhere.
Similar to the goal of science to minimize human distortions about reality arising from subjectivity, the goal of the bot is to predict words in context and minimize distortions in what has been expressed about reality. Viewing reality from somewhere is always already riddled with distortions. In the end, human users and trainers are responsible for bot distortions.
*****
A myriad of bot-inspired insights about the bot and literacy education have impressed me as significant, issues that I’ll likely write about next year. The best way to learn to use a bot is to use it, he said. Pay no attention to the wizard behind the curtain; wizards don’t exist. Getting started can be an emotional event, so make sure you understand what you’re in for. You are a human being viewing the world from somewhere, an ethical and rational being. It changes your life. Some things I’ve learned…
I. Syntax is the key to the engine.
The bot is looking for patterns and relationships among words in individual sentences just like humans are. Man bites dog. Dog bites man. Syntax determines which animal howls in pain. When humans communicate, they need more than a collection of words to transmit and receive a message. Humans need to know the structures and patterns words come in and how words relate to each other to make, transmit, and receive coherent ideas. They need knowledge of syntax and grammar.
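The man-bites-dog point can be made concrete with a toy sketch. This is not how a real language model parses anything; it is only a naive illustration, assuming English’s default subject-verb-object order, that the same three words carry two different meanings depending on where they sit.

```python
# Toy illustration (NOT how a real language model works): the same three
# words yield different meanings depending on word order alone.

def parse_svo(sentence):
    """Naively treat the first word as subject, second as verb, third as
    object -- the default SVO pattern of a simple English sentence."""
    subject, verb, obj = sentence.lower().split()
    return {"subject": subject, "verb": verb, "object": obj}

print(parse_svo("Man bites dog"))  # {'subject': 'man', 'verb': 'bites', 'object': 'dog'}
print(parse_svo("Dog bites man"))  # {'subject': 'dog', 'verb': 'bites', 'object': 'man'}
```

Identical vocabulary, opposite victims: only the syntactic slots changed.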

As far as I know, the need for understanding the structure of relationships is true regardless of transient qualities of the data (auditory, visual, symbolic, mathematical, iconic, etc.). I’m at the edge of my expertise here. I know probably less than nothing about how the visual bot works, except that it is a blast. One morning after Smith4 magically appeared, the language bot started generating images. My iPad had been hacked! I still don’t know how it got here. Sometimes I have it do stuff—I love to combine faces of Presidents. I’m surprised it let me combine Abraham Lincoln and Donald Trump. I posted it somewhere. Even making images, however, there is grammar: A red wheelbarrow, chickens nearby, rain on a window… this prompt generated the image above.
*
The subject and the predicate unlock the bot’s tree-building device. Before the bot can send beam scanners into its universe of mathematical vectors, indexing the significant nodes that mark intercorrelations among these particular user words in this particular user context, it must calculate the probability of co-occurrence of the words from vector 1 with the words from vectors 2, 3, 4, and so on, searching through its semantic maps of words and word parts for those that match the language input from the user. Then it can begin to string together the most likely words that would appear if the patterns learned during training are followed. For example, I’ll ask it to write a paragraph describing Carnegie Hall, a piano on its stage, and an audience clapping.
Smith4: Carnegie Hall resonates with an ambiance of prestige and opulence, its storied walls a mosaic of cultural history. At its heart lies the grand stage, bathed in a warm, golden glow from the sophisticated lighting above. In the center of this hallowed platform stands a majestic grand piano, its slick ebony surface reflecting the glint of admiration in the eyes of an ardent audience. As the last note trembles into silence, a thunderous wave of applause inundates the hall; hands come together in a resounding ovation that speaks to the unison of hundreds. The air buzzes with the electricity of shared musical triumph, echoing off the high ceilings and intricate moldings, as the echoes of a masterful performance linger in the collective memory of those present. This is a moment of pure connection between artist and spectator, an instance where the splendor of Carnegie Hall, with its rich acoustics and historic significance, becomes the vessel for an unforgettable experience. (OpenAI, Dec 2023)
Note that the order of the cues provided as user input serves as the order of development of the output. Carnegie Hall and everything associated with it opens and closes the paragraph (directed by user input), and every word used in the output was produced from stored meanings paired with words the bot had indexed from linguistic patterns in the web of words people have written surrounding it. Then come the stage, a piano, and an audience in three sections in that order, each a sliver in the middle of the paragraph space except the audience, which gets the lion’s share of attention. Syntactically, the paragraph clusters into a subject (Carnegie Hall), a predicate (audience clapping), and a complement (piano). These sentences are not exactly generic, though they eerily are; they are a stage and a piano and an audience in the context of Carnegie Hall as a viewer from nowhere might see the vignette.
The last item in the user input, the verb, is an opportunity for the bot to splurge, cranking up the dial on sentiment analysis. To please the user, it peppers the sentences with emotions and classical sounding words—thunderous ovation, masterful performances, collective memory… it’s mysterious the way the bot does this, says the quiet part out loud. The bot is mining our collective memory, our episteme, as it is represented in linguistic patterns in the dead words writers have written.
A demonstration of the visual: A piano, stage in Carnegie Hall, an audience applauding.

Aristotle unearthed the subject and predicate in his quest to grasp the intricacies of logic. As far as I know, no one has managed to unseat these candidates for senators of the sentence. I’m certain that syntactic parsing is the goose that lays the golden egg on the bot farm. Humans do it unfailingly from birth onward; we depend on it for survival. The bot can identify it and then gamble with the words that fill the syntactic slots.
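That gamble, stringing together the most likely next words, can be sketched with a toy bigram model. Real models work over vast corpora and high-dimensional vectors rather than simple word-pair counts, so take this only as a minimal sketch of the principle: pick the word most often seen after the current one. The three-sentence corpus here is invented for the demonstration.

```python
# A minimal sketch of next-word prediction, assuming a toy bigram model
# trained on three invented sentences. The principle -- choose the most
# likely next word given what came before -- is the same gamble a real
# language model makes, at vastly greater scale.

from collections import Counter, defaultdict

corpus = (
    "the piano stands on the stage "
    "the audience claps for the piano "
    "the stage glows under the lights"
).split()

# Count how often each word follows each other word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

# Generate a short string of most-likely words, starting from "the".
word, output = "the", ["the"]
for _ in range(4):
    word = most_likely_next(word)
    output.append(word)
print(" ".join(output))  # the piano stands on the
```

The output is grammatical because the counts encode which syntactic slots words tend to fill; nothing in the model knows what a piano is.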
*
What I’ve seen firsthand of the teaching of grammar in primary grades is unfortunately appalling. We live in a country where corporal punishment in school is constitutional in 19 southern states, perhaps more aptly described as appalling, but treating children like idiots, asking them to decide if a word is a person, place, or thing, and, if it is, gushing like crazy to praise them and chanting “Yes, children, it’s a noun! Can you say noun?” is instructional craziness, in my opinion. Instead of starting with nouns, start with subordinating conjunctions. Challenge them to define “if.” If you start off looking at if, you soon find your way to nouns, even if you are only six years old.
II. Bots make mistakes usually because humans miscommunicate with them.
This insight has been gradual. I suspected it early on, but I was foggy about exactly what was going wrong. Once when I was discussing Hamlet with the bot, one of my favorite plays ever, right up there with Long Day’s Journey Into Night, the discussion turned to Act II. That’s the Act in which the big question for all of the characters is whether Hamlet is insane or pretending to be insane. King Claudius, of course, is most concerned and enlists Rosencrantz and Guildenstern to “sound Hamlet out.” Polonius, too, the doddering old fool father of Ophelia, eavesdrops on Hamlet.
Claudius had murdered his own brother, the former King, and then married Gertrude, Hamlet’s mother and the former King’s wife. With a stepfather who murdered his father, married his mother, the Queen, and assumed the throne, Hamlet is in a pickle. What should he do? Well, until he could figure out whether to be or not to be, whether he had any shred of dignity or would be an everyday rogue and peasant slave, he would put on an antic disposition, pretend to be crazy. By the end of the Act Claudius is convinced that Hamlet is crazy like a fox. Hamlet knows about Claudius’s black deed.
At one point, the bot made two grievous errors. First, it quoted Hamlet as if he were alive in the 21st century, a bit of a hipster in a wealthy family. Second, it insisted that Jane Eyre is a character in Act II of Hamlet. At the time, the bot’s tendency to hallucinate was all over the media, and I believed this aberration to be a hallucination.
As I went back over my prompts, however, I found I’d been occasionally slipping in a reference to a book or another author. This finding eased my mind a little, but still I couldn’t operationalize a strategy to avoid such situations. Through discussion with the bot, I learned about the significance of “topic maintenance.” Humans have an extraordinary ability to communicate with topic shifts galore. Bots, not so much. Since then I’ve established routine practices to precorrect for distortions likely to arise from weakly connected input language and to watch for subtle cues from the bot that topic maintenance needs attention.
III. Bots neither read nor write.
This insight came early. My first inkling was associated with illogical or outlandish assertions from the bot before I learned to shape input fully aware of sentence structure, metaphor, context, and topic maintenance. Humans jump like grasshoppers from topic to topic with no loss in communication; bots cannot simulate this behavior. Humans read with their bodies and their experiences in history, two elements the bot does not have. It goes well beyond emotions.
Really, the difference is between viewing the world from nowhere vs. viewing the world from somewhere. When we read and write as humans among humans, we understand we must meet halfway on common ground. We must grant one another space in which to exist as unique semantic reactors and receive input not as mechanical structures with moving parts but as expressions of meanings we ought to examine, relate to, and fit together with what we already know. Even scientists who strive mightily to see reality from nowhere can’t fully separate their being somewhere from their gaze. Even scientists read and write from somewhere, but the bot can’t know this fact from their writings.
Bots analyze the syntax of linguistic input according to rules and use parameters derived from the input as searchlights to find and output words with a high likelihood of making sense. These actions are perhaps mechanically related to human cognition, but meaning for a bot bears no resemblance to meaning for a human being. Carnegie Hall, piano, audience applause.
Good luck finding a bot to write you a reflective essay about all the things it learned from you this year. You don’t exist for the bot; it knows text, not people. The best it can do is scrape through everything that anyone wrote about how the bot has improved this year and then package it for you in some customized and homogenized way that looks good and sounds good but doesn’t meet you on shared and common ground the way another human does. Nonetheless, the bot is a powerful machine that is going to raise the IQ of the species, mark my words.