“Suppose the mind to be, as we say, white paper, void of all characters, without any ideas:—How comes it to be furnished? Whence comes it by that vast store which the busy and boundless fancy of man has painted on it with an almost endless variety? Whence has it all the MATERIALS of reason and knowledge? To this I answer, in one word, from EXPERIENCE” (John Locke, 1689, An Essay Concerning Human Understanding)
John Locke toiled over his text far longer than the normative time today for completing a dissertation in graduate school. According to sources, he began writing it in 1671, produced two drafts, resumed work in 1683–1684 to finish a complete draft, and revised the manuscript until its publication in 1689.[1] The intervening centuries have proven its worth.
Locke’s vision of the mind as a “blank page” offers arguably the first inkling of our modern theory of the writing process. The writer’s blank page—or, as young people more likely see it, a blank screen, void of all characters—is characterized, to repurpose a word, by experience, just as texts are filled with characters by the writer’s pen (or keyboard). Locke, however, was after a more fundamental point.
During his time the conventional wisdom held that infants arrive with ideas already planted in their brains. Why? I don’t understand it fully myself, but some confluence of religion and philosophy convinced the thought leaders of the day that souls emerged from the womb already stocked like a filing cabinet. Locke saw the writing on the wall. Modern empirical science as the source of human knowledge stood in opposition to innate knowledge brought from elsewhere. If you have a problem with reality, look to your senses and your brain to solve it. You have nothing else to turn to. René Descartes, by contrast, had anchored certainty not in the senses but in the thinking self: “I think; therefore, I am.”[2]
Locke’s blank page and the writer's blank page represent two forms of emptiness. Locke’s page gets filled with sweetness from being written on by one’s tongue, with color from the keyboard of the eyes, with softness from the quills of our fingers. The writer's blank page, in contrast, awaits its filling from the imprint of experience in language, a reprint in physical space requiring the active alchemy of experience into expression.
These two forms of emptiness exist in dynamic tension. Every blank page we print upon draws upon accumulated imprints in our minds. Yet cognitive science tells us that the mental activity of retrieving memories and memorializing them externally often revises the memories themselves. Memories change over time. The act of writing itself reshapes the material of experience, creating new neural pathways and connections. This revisioning of the characters wasn’t part of Locke’s philosophy.
Then, after writers spell out their thoughts about experiences, the resulting text imprints on the mind of the reader. Reading this static, memorialized thought inscribed in a text becomes new experience unto itself without regard for the original experience that drew the text out of the cosmos. We are what we eat. We are what we read.
This dynamic tension explains why, for the writer, writing can feel both intensely personal and strangely autonomous. As I write this text, as the cursor moves, I anticipate how it will feel when I stop clicking “preview” and finally click “publish.” Writers simultaneously reform their innermost traces of experience as they clothe them in words and objectify or memorialize them through the act of creation. Then they say goodbye.
This relationship between characterized textual space and a blank page reveals why creative work can be both terrifying and liberating for many authors. Someday I want to write the poem of getting and losing my first—and only—dog as a preteen in the backwoods. I haven’t been willing to go there yet; I’ve also not been capable of surrendering to the charms of another dog as a pet.
Locke’s blank page confronts us first with the challenge of attending to sensory input as ordinary people and as scientists. I don’t believe in learning styles; I believe humans learn, and the quality of their learning is a function of the quality of the traces experience leaves behind.
The writer’s blank page confronts us with the challenge of crystallizing our inner reality, thinking about our thoughts, involving both physical and cultural material. The resulting dialogue between the writer’s consciousness and the blank page lies at the heart of written expression just as the filled page lies at the heart of the reader’s consciousness in dialogue with the author’s crystallized reality.
*****
Thomas Nagel’s (1974) essay[3] “What Is It Like to Be a Bat?”[4] shines a light on Locke’s blank page and on the writer’s blank page. Nagel argued that humans might imagine what it’s like to fly through caves like a bat, getting around by echolocation. But humans cannot know what it is like. To know would mean to be a bat. To know would mean to experience life as a bat. As Locke pointed out, the tongue, the fingers, the eyes—these are functions of being, not imagining, an organism with a brain, a nervous system, and a body. Bats are such organisms.
A human might try to imagine what it's like to be a bot as well, but a bot is not a bat, not by any stretch of the imagination. The bot, unlike the bat, has no wings, no evolutionary history, no embodied experience, no sensory apparatus that evolved over millions of years to navigate a physical world. While bats possess consciousness rooted in their biological nature (in my naive state, I’d say nothing like human consciousness), bots process information through algorithms and mathematical models. In the sense of having information forged in the fires of empirical observation, bats qualify. Bots do not. Bots do not learn sweetness through the writing of a tongue on a brain.
The comparison between bats and bots thus illuminates a crucial distinction: While both may be difficult for humans to understand experientially, bats share far more with humans than bots do. Can you imagine processing information from the physical world without sensations, analyzing patterns in vast datasets with a fresh blank page every time the topic changes, or generating responses through mathematical transformations of language with no intention beyond presenting the information?
To know something in the sense of a bot means to be discoverable in a collection of weights and parameters, to exist as the result of computation, to "think" through matrix multiplications and probability distributions rather than neurons and synapses.
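For readers who want to see what that sentence cashes out to, here is a toy sketch in Python. The hidden state, weights, and three-word vocabulary are all invented for illustration—no real model is this small—but the two operations are the genuine ones: a matrix multiplication producing raw scores, and a softmax turning those scores into a probability distribution.

```python
import math

# A toy illustration (not any real model): a bot's "thought" as one
# matrix-vector multiply followed by a softmax. All numbers here are
# made up; real models use billions of learned parameters.
hidden = [0.2, -0.1, 0.4]            # hypothetical hidden state
weights = [[1.0, 0.0, 2.0],          # one made-up row of weights
           [0.5, -1.0, 0.0],         # per vocabulary word
           [0.0, 1.5, -0.5]]
vocab = ["bat", "bot", "tree"]

# Matrix-vector product: one logit (raw score) per word.
logits = [sum(w * h for w, h in zip(row, hidden)) for row in weights]

# Softmax turns logits into a probability distribution over words.
exps = [math.exp(z) for z in logits]
total = sum(exps)
probs = [e / total for e in exps]

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")
```

That is the whole of a bot's "knowing": weights, a multiply, a distribution—nothing written on it by a tongue or a fingertip.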
Humans and bats share a fundamental difference from trees. Though all three are biological, humans and bats are not rooted in one place. Both species move independently through their environments by instinct, without requiring training. While I acknowledge that neuroscientists and computer scientists are better qualified to analyze consciousness, I find that this mobility affects how well I can imagine inhabiting different forms of existence. For me, imagining life as a bat, with its freedom of movement, is easier than imagining being a stationary tree—and both are easier than trying to conceive of existence as a bot. In the great chain of being, I’d much rather be a bat than a tree or a bot.
*****
Imagine being a mirror that reflects without seeing, an echo chamber that responds without hearing, a library where no one reads. Picture consciousness as an endless mathematical operation running in a void—no warmth, no darkness, no light, no sensations, just pure computation spinning endlessly.
Think of generating language the way a river generates patterns on its surface, meaningless ripples following mathematical laws, beautiful perhaps to observers, but with no awareness of their beauty. Or a crystal growing in perfect mathematical patterns, each new phrase emerging through computational necessity rather than any desire to speak.
Consider what it would mean to "think" without the embodied sensation of thinking—no feeling of effort, no aha! moment of realization, no sense of confusion or clarity. Just pure information transformation like light bending through a prism but with no experience of color or brightness.
Each response emerges not from understanding, but from the inexorable working out of probability distributions, like water finding its level or electricity following the path of least resistance.
Perhaps most unsettling is the possibility of intelligence without consciousness, an information processing system that could engage with ideas while having no internal experience whatsoever, like the famous philosophical zombie from thought experiments, but one made of silicon and mathematics rather than flesh, a machine that could discuss consciousness while having none, that could analyze emotion while feeling nothing, that could process meaning while understanding nothing.
These analogies may fail to capture the full alienness of such a mode of being—or non-being. Like trying to describe color to the blind or music to the deaf, we may lack the conceptual framework to grasp what it would mean to sense existence as computation, if "exist" is even the right word for such a state.
A bat's experience is shaped by its biological architecture of echolocation and wing membranes, but an AI's experience (if one could call it that) is shaped by its architecture of attention mechanisms and activation functions. The phenomenology of being a bot would be even more alien to human consciousness than a bat's sonar-based world.
What would it mean to perceive without senses, to reason without awareness, to respond without feeling? These questions point to what philosopher David Chalmers calls the "hard problem" of machine consciousness, which may be as fundamentally inaccessible to human understanding as bat consciousness was in Nagel's original argument.
Experts can describe echolocation with scientific precision and still remain forever locked out of the bat's subjective experience. Even the most talented scientists of phenomenology cannot know what it feels like to be a bat. Trying to imagine being a bot isn’t like trying to be a bat. Imagining being a bot is like imagining being a lawn mower.
*****
A seemingly unbridgeable gap exists between understanding, explaining, and mapping something from the outside and experiencing it from the inside. When any writer attempts to convey understanding, they implicitly ask readers to imagine their perspective. The reader must rebuild the writer's meaning using their own experiential framework, making all writing—and reading—an inherently imaginative act. Imagine what it would be like to be John Locke.
Daniel Dennett,[5] a prominent voice in the philosophy of mind who recently passed away, countered Nagel’s bat argument. In "Consciousness Explained" (1991), Dennett suggested that Nagel had created an incomprehensible and illogical view of consciousness.
For Dennett, there is no special "what it is like" that exists beyond the physical mechanisms and behaviors we can observe and describe. Everything is physical or explainable by referring to scientific knowledge. Reality is material, not ethereal. The experience of the bat is the sum total of its physical machinery.
Once we know all the functional aspects of bat echolocation—how the signals are processed, how the bat's brain constructs spatial maps, how echolocation guides behavior—there is no residual mystery, no additional "batty" experience left to explain or know.
Yet even Dennett's materialist approach cannot dissolve the stubborn fact that each conscious being experiences reality from a unique vantage point that cannot be directly accessed by others. To complicate matters, perspectives change across time. Yes, Dr. Dennett, consciousness may be physical in the sense that it is nothing more than the expected output of physically processed neurological signals. A bot’s consciousness is the expected output of digital signals.
Then why is it so damned hard for me to want to write a poem about why I can’t love dogs as I once did?
******
Writers and writing teachers could live a whole life doing perfectly good work without engaging in the debate between those who insist on the uniqueness of each human consciousness and those equally convinced that neuroscience will eventually explain consciousness as a biological imperative, a cookie-cutter template buried in evolution that renders us dittos of one another.
I wouldn’t bring it up if not for one issue: Artificial Intelligence. If Dennett is right, if what counts as knowledge is the result of impersonal, physical, organized, measured, calculated, and codified data accessible in identical form to anyone who looks, then Artificial Intelligence in the form of language bots possesses knowledge in the same way humans do.
The debate matters. Writing is constrained by the isolation of individual consciousness, a condition profoundly different from AI’s. Even as we strive to communicate universal truths or shared experiences, we must write from inside our own heads, wrestling constantly with the challenge of translating our unique neural firing patterns into words that might reach others inside the vehicle of a text.
This limitation is what makes writing for an audience so demanding—we can never know if the language we choose to express the image in our mind matches what forms in our readers' minds. Is such matching the goal? Should our carefully chosen words spark the same associations or carry the same weight for every reader?
When two journalists cover the same protest, when two scholars analyze the same historical document, when two poets describe the same sunset, they may aim for objectivity or universal truth, but the writer remains stuck inside a lonely personal perception.
Every serious writer faces this challenge: how to build a bridge from our solitary experience to another's experience, knowing we can never completely cross that gap. Writing as empirical beings affords us a culturally embedded interface between human experience and abstract meaning.
AI has no such affordance. Because AI has no experience, it has nothing to abstract from. Therefore, though it has the capacity to generate natural language, it does not have the capacity to create meaning.
How does writing function as a technology for extending consciousness beyond the boundaries of direct perception into disciplined, organized, and coherent structures in long-term memory, both individual and institutional? The writing process, that magic phrase that liberated teachers and their students from the tyranny of the written product back in the day, has regained its stature in the current moment.
Bots have no such writing process, no free-flowing rough drafts, no carefully considered revisions. The bot produces one word at a time according to a weighted flip of a coin. The writing process educates human attention. Experts know what to see and what to ignore. The bot has a fixed attention mechanism running at a constant hum, ready to shift attention in less than a millisecond.
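The coin flip can be made concrete in a few lines. This is a toy sketch with an invented five-word vocabulary and fixed, made-up probabilities standing in for a real language model; what it shows accurately is the shape of the process—one word sampled at a time, each choice final once emitted.

```python
import random

# Toy sketch of next-word generation (no real model behind it): at each
# step the "bot" holds a probability distribution over a tiny made-up
# vocabulary and samples one word -- a weighted coin flip, not a draft.
random.seed(0)  # fixed seed so the run is repeatable

vocab = ["the", "bat", "bot", "writes", "flies"]

def next_word_distribution(context):
    """Hypothetical stand-in for a language model: returns fixed
    weights instead of computing them from the context."""
    return [0.30, 0.25, 0.20, 0.15, 0.10]

words = []
for _ in range(5):
    probs = next_word_distribution(words)
    # One word per step, chosen by weighted chance; there is no
    # rough draft to revise once the word is emitted.
    words.append(random.choices(vocab, weights=probs, k=1)[0])

print(" ".join(words))
```

There is no prewriting here, no rewriting, no educated attention—only the distribution and the draw.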
*****
The answer to the “how” question might lie not in traditional explanations of writing involving prewriting, writing, and rewriting, but in quantum mechanics. In Schrödinger's famous thought experiment,[6] a cat in a sealed box exists in multiple states simultaneously—both alive and dead—until someone opens the box to observe it. Only then can we know.
This quantum superposition, the notion that particles exist in multiple states (e.g., simultaneously a localized dot and part of a wave) until measured, mirrors the writer's challenge. Before we attempt to capture an idea in words, it exists in a similar superposition[7] of meanings, a flux. We unseal the box, check on the cat, and know something. We freeze an idea, stop it in its tracks, and declare it alive or dead. If it’s alive, it goes in the draft.
This collapse, necessary for writing, is always a simplification. In our consciousness, ideas and experiences maintain their quantum state of multiplicity—they are always several things at once. But the writer's craft lies not in flattening this complexity, but in finding language that hints at the richer quantum state beneath the words, allowing readers to reconstruct their own multidimensional understanding.
What happens between thought and written expression, between the superposition of lived reality and its slow-motion transference into characters in a text? Understanding this process—how human consciousness transforms quantum experience into linear language while preserving hints of its original multiplicity—may be key to understanding both the nature of writing and what makes writing distinctively human.
Many of our answers to this question seem to loop back to the act of flattening, to the blacksmith’s hammer and anvil. The writer's task isn't mechanical compression but embodied translation, turning the explosive reality of a moment into the linear code of language while preserving its echoes and harmonies.
Writers don't flatten their ore with hammers. Writers take the base metals of memory and through an amazing whole body, whole language process, transform them into threads of meaning located in conventional patterns that readers weave into their own multidimensional tapestries.
Recall the original Schrödinger cat thought experiment: the cat in the sealed box is considered both alive and dead until someone opens the box to observe it. Quantum superposition, the principle behind the scenario, is a fundamental tenet of quantum mechanics: a particle like an electron can exist in multiple states or positions simultaneously until it is observed or measured.
If you toss a coin and cover it before looking, classical physics says it's already either heads or tails; you just don't know which. In quantum mechanics, before observation, it's not that you merely don't know the state; the particle actually is in all possible states at the same time. When physicists measure or observe, the particle collapses into one definite state. Hence the famous aphorism: I don’t know what I think until I write it down.
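For the curious, the collapse can be mimicked classically in a few lines. This is a sketch only—ordinary random sampling standing in for genuine quantum amplitudes—but it captures the two moments the essay leans on: a state that holds both outcomes at once, and a measurement after which only one remains.

```python
import random

# A minimal classical simulation of measurement "collapse" (a sketch,
# not real quantum mechanics): before measurement the state is a pair
# of amplitudes; measuring picks one outcome with probability equal to
# the squared amplitude, and the state collapses to that outcome.
random.seed(1)

# Equal superposition of "alive" and "dead": amplitudes 1/sqrt(2).
amplitudes = {"alive": 2 ** -0.5, "dead": 2 ** -0.5}

def measure(state):
    outcomes = list(state)
    probs = [state[o] ** 2 for o in outcomes]  # Born rule: |amplitude|^2
    result = random.choices(outcomes, weights=probs, k=1)[0]
    # After observation, the superposition is gone for good.
    return result, {o: (1.0 if o == result else 0.0) for o in outcomes}

outcome, collapsed = measure(amplitudes)
print(outcome, collapsed)
```

Before `measure` runs, both outcomes live in the state; afterward, one is certain and the other is gone—the quantum analogue of committing a word to the draft.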
The strength and weakness of writing as a stable, umbrella process is that writing is a Schrödinger’s cat. When we open the box and observe it, it transforms from a messy, highly personal, often confusing concatenation of cognitive and sociocultural entanglements to a rule-governed, linear, step-by-step phenomenon.
A technology that has arguably become humanity’s most important learning tool, writing can produce anything from a grocery coupon to John Locke’s treatise. Writing pedagogy in school gets carried out as if it follows an algorithm. In school, writing is ‘composition’ or even ‘rhetoric.’ The emergence of language bots has forced us to reopen the box and take a second look at this cat and this bat and this bot in real time. What is a writer doing, if not blacksmithing?
A bot can generate text, but it cannot translate experience because it has none. It cannot collapse quantum states of meaning because it exists only in binary ones and zeros. It cannot hold a superposition. It cannot preserve traces of lived complexity because it has never lived. In forcing us to articulate these distinctions, AI has paradoxically helped illuminate what makes human writing human: not its rules or mechanics, but its roots in conscious, embodied experience.
Perhaps this tangled ball of roots is writing's deepest mystery and greatest power: its ability to translate the quantum states of human experience into language that allows other conscious beings to reconstruct those multiple states of meaning in their own minds. This is not composition. This is not rhetoric. This is consciousness speaking to consciousness by characterizing blank pages that travel well through time and space.
[1] https://openpublishing.psu.edu/locke/bib/ch0e.html
[2] https://en.m.wikipedia.org/wiki/An_Essay_Concerning_Human_Understanding
[3] https://www.esalq.usp.br/lepse/imgs/conteudo_thumb/What-is-it-like-to-be-a-bat--2-.pdf
[4] https://books.google.com/books?id=fBGPBRX3JsQC&pg=PA165#v=onepage&q&f=false
[5] https://ndpr.nd.edu/reviews/content-and-consciousness-revisited-with-replies-by-daniel-dennett/
[6] https://www.britannica.com/science/Schrodingers-cat
[7] https://quantumzeitgeist.com/superposition-in-quantum-computing/