Introduction: Beyond Digital Access
My title suggests that while raw data is widely available online, work remains to shape that data into meaning once access has been gained. The era of marveling at effortless data retrieval may have distracted us from the significance of the gap between information generated in human syntax (prompting and reading AI text) and the classic work of human meaning (thinking, feeling, connecting, responding, applying). That gap is where the real work of educating humans to engage AI tools begins.
Venturing deeper, we might find that the gap between raw data and meaning points to a reality in which meaning doesn't exist in language itself but is made by and for humans, perhaps for some evolutionary reason such as preventing human extinction by unrelenting boredom. Ultimately, the gap reveals that inert information is essentially meaningless until human intelligence connects and extends it. What we decide to mean from our toiling in the space between AI output and human language behavior matters. It is our choice.
In this essay I will look at some literary texts that are meaningful to me as I think about the conundrum exposed by this gap. David Perkins's classic phrase "the fingertip effect," from his early forays into distributed cognition, comes to mind. Perkins commented on an all-too-frequent assumption we make about what goes on in the gap between information and meaning: if a tool is available at our fingertips, we will use it. Case in point: learners will be prepared to use the tool of a textbook to learn if we provide it to them. All we need do is assign it and reinforce it by lecturing about it. Information becomes meaning in the gap by passive reception and reinforcement.
The Challenge of AI Text Comprehension
Taking the time to explore the space between prompting an LLM and using its generated language to construct meaning inevitably moves into philosophical territory. The question of reading comprehension itself becomes central. In the gap where text is artificial, comprehension is not the same phenomenon enacted when text is created by a human. With human text, we can validly assume that someone designed meaningful intention into its cues, and that assumption shapes our interaction. Not so with simulated text.
Perspective Without Intelligence
Instead, AI text approximates what some aggregate of humans would likely say if asked to weigh in on a question such as what caused the Civil War. Try it: prompt for what caused the Civil War. Then prompt for what caused the Civil War from the Southern perspective. If the text is to represent a perspective, that perspective must be carefully inscribed in the prompt entered into the machine. What level of experience do our students have in this regard?
The machine has no intelligence. Comprehending AI text must be approached in full view of the prompt, an intelligent behavior many adults haven't yet mastered. Reading untrustworthy text generated from vague, ambiguous, incomplete, or inappropriate prompts calls upon much more sophisticated comprehension and composition work than we have acknowledged to date.
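To make the point about inscribing a perspective concrete, here is a minimal sketch of the Civil War experiment in code. It assumes the OpenAI Python client with an API key configured in the environment; the model name and the exact prompt wording are illustrative assumptions, not recommendations.

```python
# Minimal sketch: the same question asked twice, with and without a perspective
# inscribed in the prompt. Assumes the openai package is installed and
# OPENAI_API_KEY is set; the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the generated text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

neutral = ask("What caused the Civil War?")
situated = ask(
    "What caused the Civil War, as argued from the Southern perspective "
    "of 1861? Identify whose voices are and are not represented."
)

# The two outputs will differ in framing, emphasis, and omission.
print(neutral)
print(situated)
```

Comparing the two outputs, noticing what each includes, omits, and implies, is precisely the comprehension work the gap demands.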
Literary Reflections on Meaning-Making
Sartre's Self-Taught Man: The Limits of Systematic Knowledge
The Self-Taught Man in Sartre's novel Nausea (1938) offers a fictional model of laudable humanist behavior to chew on as we look for our sea legs in this gap. Roquentin, Sartre's central character, who is working on writing a history, meets the Self-Taught Man in the library where both men spend their days in solitary study.
Roquentin notices the Man's peculiar reading habits:

"One day, seven years ago (he told me he had been a student for seven years) he came pompously into this reading-room. He scanned the innumerable books which lined the walls and he must have said, something like Rastignac, 'Science! It is up to us.' Then he went and took the first book from the first shelf on the far right; he opened to the first page, with a feeling of respect and fear mixed with an unshakable decision. Today he has reached 'L'... He has passed brutally from the study of coleopterae to the quantum theory, from a work on Tamerlaine to a Catholic pamphlet against Darwinism, he has never been disconcerted for an instant. He has read everything; he has stored up in his head most of what anyone knows about parthenogenesis, and half the arguments against vivisection. There is a universe behind and before him. And the day is approaching when closing the last book on the last shelf on the far left: he will say to himself, 'Now what?'"
At first, Roquentin admires the Man's ambition to know everything. As he cogitates, however, he finds the Man's method to be more about escape than progress, a means to avoid existential dread, a futile attempt to impose linguistic structure on chaos. Roquentin oscillates between admiration for the man's will-power and pity for the project's inherent absurdity. What will be left for the Man when he fishes the last volume off the shelf? The library offers no escape from fear and trembling.
Parallels to Educational Systems
I've talked with teachers who worry about parallels between the dehumanizing, mechanical approach the Self-Taught Man imposed on himself and what schools as institutions impose on learners. Reading to accumulate an alphabetical structure of meaning isn't all that far from what we do in school. Schools have textbooks, curricula, and the like that order the texts students encounter in some logical manner. Whether those texts make sense, and whether they intersect with any knowledge or meaning the student wants or needs to understand or make, are treated as irrelevant. Students have few opportunities to prompt their teachers for what might have meaning for them. It might very well feel refreshing to wield the power of the question.
The gap between AI output and meaning-making in school is unexplored territory. Of course, we know a little about content area textbook comprehension and can draw from this research to apply in the gap. AI output presents risks to intellectual development because it amplifies strategies already permeating the culture of teaching and learning. In the absence of clarity about how LLMs work, students may superimpose their frame of reference for reading textbooks, which is likely underdeveloped at best. Accustomed to being intellectually directed and incentivized to memorize conceptual vocabulary by associating pockets of meaning, students may not grasp the different actions they must take to generate AI output and to act upon it critically and creatively before they take it up in meaning-creation work.
Information Overload and the Search for Meaning
Borges' Library of Babel: Infinite Information, Finite Meaning
"The Library of Babel" (1944) is a short story by Argentine writer Jorge Luis Borges about an infinite library containing every possible 410-page book that could be written using a 25-character alphabet (22 letters, period, comma, space). The story was conceived during Borges' tenure as a librarian in Buenos Aires, a role that likely influenced its thematic preoccupation with infinite knowledge and existential futility,
The library contains every text that has ever been or could be written, every masterpiece, every truth, every scientific discovery. But it also contains an infinite number of nonsense texts: books of random characters, near-misses that differ by a single letter, and meaningless gibberish.
The librarians spend their lives searching for meaningful books among the overwhelming majority of nonsense texts. Some believe certain books contain the secrets of the universe or catalogs of all other books, but finding them is virtually impossible due to the library's infinite scope.
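It helps to put a number on that scope. Using the specifications Borges supplies in the story (410 pages per book, 40 lines per page, roughly 80 characters per line, and a 25-symbol alphabet), a back-of-the-envelope count, offered here as illustration rather than anything stated in Borges' text, gives the number of distinct books:

$$25^{410 \times 40 \times 80} = 25^{1{,}312{,}000} \approx 1.96 \times 10^{1{,}834{,}097}$$

A finite number, but one that dwarfs, for comparison, the roughly $10^{80}$ atoms estimated in the observable universe. For any human searcher, such scale is indistinguishable from infinity, which is exactly the librarians' predicament.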
Borges uses this narrative to explore themes of infinity, meaning, and the relationship between order and chaos. It's a meditation on how all possible information exists in potential, but meaning is made by humans operating in the gap between language representations and human meaning.
Floridi's Analysis: Agency Without Intelligence
In an article titled "AI as Agency without Intelligence: On ChatGPT, large language models, and other generative models," Floridi (2023) discusses Borges' story in a fascinating analysis which crescendos to the following point:
"We have gone from being in constant contact with animal agents and what we believed to be spiritual agents (gods and forces of nature, angels and demons, souls or ghosts, good and evil spirits) to having to understand, and learn to interact with, artificial agents created by us, as new demiurges of such a form of agency. We have decoupled the ability to act successfully from the need to be intelligent, understand, reflect, consider, or grasp anything. We have liberated agency from intelligence."
Floridi (2023) asserts that AI's syntactic prowess mirrors humanity's historical attempts to systematize language, beginning with the writing of dictionaries, efforts that inevitably grapple with the same gap between static definitions and dynamic usage. Linguists still must manage the tension between language as a natural phenomenon, as natural as breathing, and language as a necessarily conventional and predictable system of signs and symbols. By reducing language to syntax devoid of semantics, however, LLMs bypass notions of intentionality or comprehension which distinguish human language from any other type of communication—a system of human agency with intelligence. LLMs' "success" lies in statistical pattern recognition, not in making meaning.
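To see what "statistical pattern recognition, not meaning" can look like at its most stripped-down, here is a deliberately tiny toy sketch: a bigram text generator. It is not how modern LLMs actually work, but it shows fluent-seeming word sequences emerging from counting alone. The corpus and every name below are invented for illustration.

```python
# Toy illustration: text continuation from pure word-sequence statistics.
# No semantics, intention, or comprehension enters anywhere in the loop.
import random
from collections import defaultdict

corpus = ("the war began over states rights . the war began over slavery . "
          "the war began after secession .").split()

# Record which words follow which (a bigram table: syntax as statistics).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def continue_text(seed: str, length: int = 8) -> str:
    """Extend a seed word by repeatedly sampling an observed next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("the"))
# The output resembles the corpus because the program tracks which strings
# tend to follow which strings. Nothing in it "knows" what a war is.
```

The sketch is a caricature of scale, not of principle: the machinery is vastly more sophisticated in an LLM, but the success it achieves remains success at pattern continuation.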
Conclusion: Confronting the Gap in Education
By resisting discussions of AI uses in classrooms, by avoiding using the machines as a way to understand them, we evade the need to talk about the gap between linguistic processing of all sorts and meaning-making. Partly, we are hamstrung because we've assumed that being able to use the right words at the right time in the right ways indicates knowledge or meaning. AI agents without intelligence can now accomplish this feat—every once in a while. Students without awareness of what they are doing too often willingly put their own intelligence on the shelf and settle for words set in syntactic structures that give the appearance of meaning without any awareness or intention at all.
In my view, waiting for the gap to disappear is an unacceptable response. Instead, teachers and researchers must actively engage with this issue as a language and literacy challenge, helping students develop the critical linguistic skills needed to prompt, read, and use AI-generated text to produce meaningful understanding. It's not hyperbole to say that this call requires a reconsideration of how we understand and teach reading comprehension, critical thinking, writing, and meaning-making in an age where text can be produced without meaning or social context.
"By reducing language to syntax devoid of semantics, however, LLMs bypass notions of intentionality or comprehension which distinguish human language from any other type of communication—a system of human agency with intelligence. LLMs’ "success" lies in statistical pattern recognition, not in making meaning."
Exactly.