Where Have All the Human Texts Gone?
The answer is not blowing in the wind
Historically, texts are, ipso facto, explicit. They mean what they say. They have well-wrought surfaces; they are potentially public, accessible under the right conditions, and durable. Whether carved in stone, printed on paper, or displayed on a screen, they are physical objects bound by quantum physics like trees, rocks, and clouds.
Unlike trees, however, texts are creations from the spirit world, containers of semantic potentials that must be mortally activated during cultural rituals. Unlike rocks, texts have neither mass nor weight; they are essentially inessential; they can't become castles or walls or nuclear reactors. Unlike clouds, texts neither rain nor rise up like horses in the sky on blustery days.
So what are they? Where do they come from? What good are they?
A blast from the shadowy past
These sorts of questions have been around for as long as texts have existed as objects. The oldest question was practical: what can a text hold? The earliest writing in Mesopotamia, around 3200 BCE, arose from a system of clay tokens used for bookkeeping, for recording grain, livestock, and trade goods.
The Greek philosophers asked whether text was good for the soul. By the fifth century BCE, texts were abundant in Athens — legal, commercial, literary. The question Plato raised through Socrates was a moral and epistemological worry: do texts make us wiser or make us feel wiser? Do texts deceive us?
In Plato’s Phaedrus, the Egyptian King Thamus tells Theuth, the inventor of writing, that his creation will produce not memory but the semblance of memory, not understanding but the conceit of understanding. The text gives the reader something to point to, but what is the point of pointing?
The Sophists, whom Plato despised, asked the most dangerous question of all without mentioning texts. Their art was oral, but their claim that logos could make the weaker argument stronger becomes more potent once logos is rendered into text.
If logos can make the weaker argument stronger, and if text is logos made permanent, portable, and repeatable, then text is a Sophist’s secret weapon, because it does its persuasive work in the absence of its maker, over and over, without the audience being able to cross-examine the speaker. A spoken deception dies when the speaker leaves the room. A written one does not.
So the Greeks asked three things:
Does text weaken the mind by replacing real knowing with its appearance?
Can text transmit the kind of knowledge that matters most?
Can text deceive because it is explicit, well-wrought, and unbound from its maker’s presence?
The Renaissance speaks on text
Centuries later, Erasmus asked a question that shatters the notion that texts are mechanically and self-evidently explicit: Has the text we've been reading for a thousand years even been copied correctly? He warned that "…often through the translator's clumsiness or inattention the Greek has been wrongly rendered; often the true and genuine reading has been corrupted by ignorant scribes, which we see happen every day, or altered by scribes who are half-taught and half-asleep."
Montaigne pushed the question of trustworthiness inward, toward the reader and thinker. He refused to treat his essays as finished, revising them without erasing the earlier layers. What one textualizes in the moment can remain valid later, after one's mind has changed, even if the two versions contradict each other. One's reading of the same text can change over time; experience, not language alone, informs the mind. If the text drifts, what about the reader?
He didn’t ask whether texts could be trusted. He asked whether the self that encounters them is stable enough to receive them, whether the world in which we exist is even knowable enough to pin down explicitly. The knower shifts; therefore the knowing shifts; therefore the text means differently each time it is read. He declared, "I study myself more than any other subject. That is my metaphysics, that is my physics."
Luther drove the question into the streets. If the Bible could be printed in German and placed in the hands of any literate person, who decides what it means? The priest? The reader? The text itself?
Luther’s answer — scripture alone, sola scriptura — sounds like a vote for the explicit surface of the text in that it authorizes the text of the Bible as its own warrant, but it actually opened an abyss. Once the institutional interpreter is removed, the text circulates among millions of readers whose tacit resources differ enormously; the “plain meaning” splinters into a thousand readings, and different denominations take root. Gutenberg made this possible. Luther made it inevitable.
So the Renaissance asked three things:
Has the text survived intact, or has the explicit surface been corrupted by the very hands entrusted to preserve (or translate) it?
Is the reader stable enough to receive a text, or does the knower’s shifting inner life make every reading a different reading?
Once the institutional interpreter is removed and the text circulates freely, who decides what it means?
Looking backwards into the future in the 19th century
By the nineteenth century, the world had begun to emerge as global, and the past was becoming murkier and murkier. The denizens of this century asked a cutting-edge question: what does it take to understand a text written by an alien who lived in a time unrecognizable to you?
The ancients assumed a shared world between writer and reader. The mainstream Renaissance humanists worked to recover corrupted texts but assumed that once restored, the classical authors would speak clearly to any educated person. Malcontents like Montaigne and Martin Luther cut against the grain of authority and ushered in a world more familiar to us today with our grasp of reading as a discursive process subject to discrete conventions and protocols.
Shaped by historical consciousness, by colonialism’s encounter with radically different cultures, by geology’s demolition of biblical chronology, the 19th century could no longer assume that shared understanding was automatic by way of textual explicitness. The gap between the context in which the writer wrote and through which the reader reads became the problem itself.
Schleiermacher proposed a “universal hermeneutics” that would apply to any text and any act of understanding, not just Scripture. His method had two dimensions: grammatical interpretation, which analyzed the language of the text within its linguistic system, and psychological interpretation, which attempted to reconstruct the mental life of the author. Montaigne performed this insight on himself; Schleiermacher built a methodology around it.
Understanding a text meant recovering the act of its creation by entering the mind that made it. The method he devised he called the hermeneutic circle. A reader cannot understand the parts of a text without understanding the whole, and that reader cannot understand the whole without understanding the parts. Understanding is not linear extraction. It is an iterative, circular, never-quite-finished act of interpretation.
Dilthey extended Schleiermacher’s project, arguing that human understanding is shaped not just by language but also by the historical and cultural context in which an individual lives, writes, and reads. He drew a sharp line between the natural sciences, which explain (erklären) through invariant causal laws, and the human sciences, which understand (verstehen) through interpretation.
A rock can be explained. A text must be understood. And understanding requires the interpreter to bring something of themselves — their own historical situatedness, their own lived experience — to the encounter.
In the late 19th century, the diagnosis of dyslexia quietly reinstalled the very assumption that Montaigne, Schleiermacher, and Dilthey had dismantled. If the problem lies in the child's neurology, then the text is functioning exactly as it should, delivering meaning to anyone whose decoding machinery works properly.
Comprehension becomes a technical problem, not a hermeneutic one. The reader's shifting inner life, historical situatedness, and tacit resources disappear from the account. What remains is a pipeline: text appears, decoding happens, meaning arrives. Fix the decoder, fix the reader, and all is well. This medical framing migrated directly into 21st-century reading instruction, where it became the foundation of phonics-first legislation and the "science of reading" movement.
So the nineteenth century asked three things:
What does it take to understand a text written across an unbridgeable gap in time, language, and culture?
Is understanding a text fundamentally different from explaining a rock — and if so, what does the reader bring that the text cannot supply?
If a child of normal intelligence cannot read a text, is the problem solved by teaching the child to decode, on the assumption that texts deliver meaning explicitly, on their own?
Looking forward into the past in the 20th century
Scientists and philosophers of the 20th century spent a hundred years demonstrating, from multiple angles, that the explicit surface of a text is not where the action is, only to end up, with E.D. Hirsch, returning to the safe ground of literalism at the turn of the new century.
In 1908, Edmund Burke Huey published The Psychology and Pedagogy of Reading, the first scientific treatise on reading as a cognitive act. He wrote that "to completely analyze what we do when we read would almost be the acme of a psychologist's achievements, for it would be to describe very many of the most intricate workings of the human mind."
Huey studied eye movements, inner speech, word recognition, and the role of context, demonstrating that reading was not a simple act of decoding, but a complex interaction between perception, language, memory, and expectation. Huey’s interest in eye movements was singled out and became, ironically, the substrate for research which later in the century would argue that reading IS decoding letter by letter, word by word, and understanding follows as a byproduct.
Saussure, the great linguist, showed that words don’t point to things; they point to other words in a structured system of differences. He divided the sign into two components — the signifier (sound-image) and the signified (concept) — and stressed that the relationship between them is arbitrary. Contrary to the assumptions behind the dyslexia framing, accessing the sound-image is only the beginning of the real work of reading.
Saussure's insight — that meaning arises from a system of differences, not from a direct correspondence between word and world — found its way through Chomsky into reading theory by way of the psycholinguists of the 1960s and 1970s. Kenneth Goodman and Frank Smith, drawing on the same structural logic, argued that reading is not primarily a visual or decoding process but a language process.
The reader does not march letter by letter across the explicit surface, converting graphemes to phonemes and waiting for meaning to arrive at the end of the pipeline. The reader brings the entire system of language and all of its tacit resources to bear on every encounter with text. Smith put it bluntly: the reader contributes more to reading than the text does. Meaning is made by a knower who brings the linguistic system to the surface, not extracted from a surface that contains the system.
These linguists and psycholinguists insisted that reading the surface of the text, attending to the words on the page while expecting meaning to magically form, fundamentally misconstrues the act of reading, which is powered by the reader’s consciousness as much as by language.
The post-structuralists went further and dissolved the author. They set meaning adrift, some with a cord attached to the text, keeping the vessel close to shore, others positioning the reader, in a way reminiscent of Montaigne, as a sort of Captain Ahab, authorized to chase the white whale to the ends of the earth.
Vygotsky anchored language in social interaction and thereby implicated not just the solo reader, but others in community with the reader in the act. Bakhtin insisted every utterance carries the voices of everyone who ever used its words; the whole notion of an author is problematic. Kintsch later modeled comprehension as construction, not retrieval, and argued that every reading is undertaken inside a situation which has something to say about what the text says.
P. David Pearson, working from the Center for the Study of Reading at the University of Illinois in the late 1970s and 1980s, translated the century’s theoretical insights into a practical account of what readers actually do. With Richard Anderson, Pearson developed a schema-theoretic view of comprehension, showing how the reader’s knowledge already stored in memory functions in the process of interpreting new information and accommodating it to become part of the knowledge store.
Pearson and his colleagues worked to uncover the skills and strategies involved in successful reading comprehension, and as they became more involved in classroom-based research, they began to question the portrayal of reading comprehension as a linear progression from decoding to fluency to understanding.
In this transformation process, what began as the author’s ideas becomes the reader’s ideas. The text proposes; the reader disposes. Pearson’s work made the theoretical case empirically concrete: reading comprehension might begin with extraction from an explicit text, but it evolves as an act of construction, and what the reader brings to the encounter is not supplementary but constitutive.
Marie Clay, a New Zealand developmental psychologist, built something none of the theorists attempted: a system that worked with real children. Her Reading Recovery program, developed in the 1970s, treated the struggling beginning reader not as a broken decoder but as a meaning-maker whose tacit resources — language knowledge, story sense, directional awareness, self-monitoring — needed to be orchestrated, not drilled in isolation.
Clay observed children closely, catalogued what they could do rather than what they couldn’t, and designed one-to-one interventions that began where the child was, not where the curriculum said the child should be. She understood that reading is a complex problem-solving activity in which the child draws on multiple sources of information simultaneously, precisely the integration of mind, language, and explicitness that this essay has been describing.
What distinguished Clay’s work was not just its theoretical sophistication but its insistence on teacher expertise. Linda Darling-Hammond, in her 2017 Learning Policy Institute study of effective professional development, identified Reading Recovery as an example of a program that possesses all seven elements of effective professional development and has been found to generate positive student gains.
The findings suggest that the Reading Recovery professional development program can positively affect student achievement on a large scale. In a field littered with programs that train teachers in scripts, Clay trained teachers to think — to observe, hypothesize, and respond to what a particular child does with a particular text on a particular day. That is hermeneutics in practice, whether Clay would have used the word or not.
From every direction — linguistics, philosophy, psychology, cognitive science — the verdict converged: meaning is not on the page. It is assembled by a knower whose tacit resources the text can summon but never supply.
The knowing of how to make, enter, and move through a text is exclusively human, almost entirely tacit, somewhat mysterious. The knowing lives in the space between thought and language, entirely interior and therefore impossible to witness or explain.
During the 21st century, humans developed the capacity to clone texts.
So the twentieth century asked three things:
If meaning is not on the page but is assembled by the reader, what exactly is the text contributing — and what happens when we build an entire educational system on the assumption that the text does all the work?
Can reading be taught as decoding when every serious theory of comprehension describes it as construction?
If a machine can produce a well-wrought surface without consciousness, is the surface still a text — or is it something else entirely?
Texts created by machines
Every era in this history asked its question about texts assuming that a human being stood on both sides of the encounter. Plato worried about what writing would do to memory, but he never doubted that memory was a human faculty. Montaigne questioned whether the reader was stable, but he never questioned whether the reader was human. Schleiermacher built a method for crossing the gap in time and space between writer and reader, but both remained human with inner lives, historical situations, and bodies that had moved through the world.
The large language model eliminates both sides of the encounter entirely. It neither reads texts in any way congruent with what centuries of human thinkers have theorized, nor does it write texts from an embodied world. It produces something — fluent, grammatical, well-formed something — without having authored it. No consciousness stood behind the words.
No forge shaped the metal. No tacit residue of gesture, loss, confusion, desire, or lived experience informed the selection of this word rather than that one. The surface was generated by statistical regularities extracted from a corpus of human texts, then reassembled according to probability. It is text cloned from other texts: an explicit surface derived from other explicit surfaces, with no interior at any point in the process.
A human text, however crude, is the trace of a consciousness that thought something and reached toward language to say it. The cracks between thought and language — the cracks this essay has been mapping from the Greeks to the present — are where the meaning lives. A cloned text has no cracks. It was never thought. It passed through no space between interior life and linguistic expression because there was no interior life. The surface is all there is.
Reading, as every thinker in this history understood, is not extraction from a surface. It is an encounter between two subjectivities — the reader’s and the author’s — mediated by the explicit surface of the text. The reader enters Schleiermacher’s hermeneutic circle, moving between parts and whole, bringing tacit resources to bear, reconstructing something of the mind that made the text. But when the text was not made by a mind, the circle has nothing at its center. The reader reaches for the author and finds no one home.
Inca Garcilaso de la Vega, in his Comentarios reales of 1609,¹ tells the story of two Amerindians sent to deliver ten melons along with a letter counting the fruit. Curious about melons, which they had never tasted, they wanted to eat some but feared the letter would tell on them. One proposed a solution: hide the letter behind a wall so it could not see them eat. His companion agreed. They hid the letter, ate two melons, and were caught — because the letter did not need to see. It only needed to repeat what a human had written, unchanged, the same words over and over, which is what Socrates said all texts do.
They attributed to the explicit surface a power it does not possess — perception, witness, consciousness. We are making the same error in reverse. We stand before the LLM’s fluent output and believe it has thought because it has language. We attribute to the cloned surface the powers of the consciousness that ordinarily produces text — the forge, the body, the cracks where meaning lives. But the LLM’s text is Garcilaso’s letter, not the mind that wrote it. It cannot see us. It cannot think for us. It cannot read.
Where have all the human texts gone? Nowhere. But for the first time in the history of the written word, they share the world with their clones. The question is whether we can tell the difference — and whether we still care enough to try.
So the twenty-first century must ask three things:
If a text was never thought by anyone, can it be read — or only processed?
When the hermeneutic circle has no author at its center, what is the reader reconstructing?
If we can no longer tell a human text from its clone, have we lost the text — or have we lost the reader?
¹ Copy and paste this URL into a browser; the source PDF will download immediately.
https://archivo.revistas.ucr.ac.cr/index.php/filyling/article/download/4474/4291/0
