Reading Texts Generated By Agents Without Intelligence
Essay 4 in a series of five Luciano Floridi-inspired texts on LLMs
“According to historians of Greek philosophy, [Lewis] Campbell’s most significant achievement was his contribution to the debate on the stylistic features of Plato’s late dialogues. He invented a philological method to solve the riddle of the chronology of Plato’s dialogues” (Tomasz Mróz, 2019).
Why should it matter when Plato wrote his dialogues? Are they, or are they not, independent texts? Must they be read in order of composition? In the 19th century, Lewis Campbell spent decades working on this question. In the 21st century, Tomasz Mróz wrote about Lewis Campbell writing about Plato. Campbell ultimately conceded that much of it was a matter of opinion, though he trusted his own. But he did nicely crystallize some features of Plato's signature dialogic style:
“1) An extreme minuteness of distinction, giving rise to new compounds and derivatives, and to the employment of old words with new shades of meaning.
2) An affectation of variety, leading to different modes of expressing the same thing.
3) Combined with these a learned fulness of diction, commanding the resources of the written as well as the spoken language, and moulding old words to the expression of new ideas: e.g. νυμφευτής, ‘one who brings together with marriage’; ‘unwritten’; ἄστροφος, ‘without twisting’.
4) The frequent and familiar use of words denoting physical and mathematical as well as ethical conceptions. To which may be added
5) the tendency to fix in language some of the leading generalizations of philosophy.” (pp. 393-94).
The systematic study of author attribution throughout the 20th century relied on quantitative methods to resolve disputes over anonymous or contested texts. Advances in transformer-based models in the late 2010s took things a step further. With LLMs, technology created a new type of author, a contested author, some say an intruder, expanding the author attribution task to include output from computational language machines.
In this essay, I want to consider what textual origin might mean for reading comprehension. If human authors have intentions, and if those intentions are inherent in the author's syntax and semantics, then human readers, regardless of any conscious identification of style features, respond to human authors in human terms.
Humans have been reading authors since the dawn of time. Humans have been reading dataprint generated by LLMs for three years. My argument is threefold:
Authors are humans with distinct stylistic language signatures of an infinitely wide variety; LLMs are computational machines with limited stylistic signatures of a finite variety.
Authors are autonomous agents who continuously evolve unique, stable systems of language expression linked to knowledge and experiences that adapt and expand across time; LLMs are semiautonomous agents that use computational language algorithmically to articulate information in stasis.
Authors are intelligent, with perceptual, cognitive, and emotional capacities that enable thinking, learning, and memory; LLMs are agents without intelligence and must be provided with human thinking and memory to function.
Readers as Agents with Intelligence
Baseball is a classic analogy for reading. The reader is the batter, taking the batter's stance at home plate. The pitcher, the author, takes the pitcher's stance on the mound. Heads nod. The pitcher winds up, pitches the ball. The batter expertly constructs a sense of where the ball is soon going to be. Through selective noticing and intelligent micro responses, the batter recoils, swings, follows through. The event changes on a dime. The next frame depends entirely on what happened to the ball.
Batters often hear "Keep your eye on the ball!" Eye-movement studies call this advice into question. Batters tend to move their head more than their eyes in the direction of the ball. Interestingly, they also keep their eyes on the pitcher. To be precise, they watch the pitcher's arm and shoulder, noticing the angle of the wrist, the grip, the fingers.
Most batters know their pitcher at least a little. They know a power pitcher will throw fastballs over 100 mph, sending the ball dead into the strike zone. Control pitchers throw to precise locations; they minimize walks by finding the sweet spot of the strike zone. Knuckleballers are rare, throwing balls with minimal spin at around 70 mph, a dying breed these days, I hear. Foveal gaze stays on the ball through the early portion of the pitch as the batter verifies judgments of speed and trajectory based on experience and intuition. Then the pre-swing prep work, the swing.
Pitchers, then, have an identifiable style at a high level of abstraction that precedes them to the ball field; they perform according to the rules and agreements of the sport; and in the moment of a pitch they afford batters rich sensory information about the speed and trajectory of the ball by way of their hands and arms. There is a grammar and a syntax to reading a pitch just as there is to reading a text. In human-to-human author-reader relations, readers make use of authorial invitations and cues to play ball. In LLM-to-human relations, however, there is no author. The rules, the agreements, the sensory information from experience, the knowledge, the idiographic voice signature from word choice to syntax: there literally is no author.
The point that syntax is far more important in shaping reading comprehension than cultural models of reading allow is critical. In human writing syntax is used to highlight, to emphasize, to surprise. The feelings of the author with respect to the substance the reader notices are revealed partly through choice of syntax. Note well: LLMs are incapable of using syntax to support comprehension because LLMs are unaware that they are communicating. That presents a problem for the reader.
The outsized role of syntax in communication aligns with Pearson's (1974) early study of syntax in third and fourth graders' reading comprehension. He conducted three experiments with sixty-four third and fourth graders to see how syntactic complexity affected their comprehension. In experiment one, he compared eight causal forms, e.g., "Because John was lazy, he slept" vs. "John was lazy. He slept," and four adjectival forms, "The tall man liked the girl" vs. "The man liked the girl. He was tall." In experiment two, he had the children rank forms by clarity, and in the third experiment he tested recall of causal relations.
His findings suggest that syntax is effective not in relation to its level of complexity, but in relation to the author's intentions. Among them:
Complexity helps: More grammatically complex/longer sentences produced better comprehension
Kids prefer complexity: Children chose more cohesive, complex forms when given options
Chunking wins: People process integrated semantic units rather than breaking down complex structures
Unified storage: Children naturally stored causal relations as unified chunks regardless of input form
He emphasized one additional implication, for vocabulary instruction: present words in grammatically complex, semantically rich contexts rather than simplified structures. The explicitness of complex forms outweighs their processing difficulty.
Pearson drew a distinction between divided attention and genuine comprehension. Readers often appear to be attending to text (i.e., eyes on the ball) while actually engaging in "divided attention," where "the mind takes flights of uncontrolled fancy," as in the case of a batter who sees himself riding on the shoulders of the cheering team, having singlehandedly won the game, while hearing the umpire shout, "Strike three!"
This notion of divided attention at first looks like a problem for the reader. This research unearthed a central role for the author to make their presence felt in the text, particularly by using syntactic position for emphasis. Comprehension improved not when sentences were simplified, but when they were structured to emphasize meaningful conceptual relationships. He termed this "unified activity" versus "divided activity," where readers engage with the whole semantic structure rather than isolated words in need of linkages. In our baseball analogy, this represents the difference between treating each pitch as a separate event versus understanding the pitch in the context of the entire game strategy.
Why Reading LLM Output is Not the Same as Reading Human Text
Those high school students who take their learning seriously are particularly unsettled by LLMs, I've learned through recent student interviews. They fully understand the credibility problems, including two infamous ones: plagiarism and hallucinations. As they prepare for college, some decide to opt out of regular interactions with bots. Yet they also worry that this avoidance may leave them unprepared for a future where AI-generated text is ubiquitous across academic, professional, and civic contexts.
In their eyes, AI-generated text doesn’t remotely resemble human writing. LLMs have no knowledge; they understand that. LLMs have no intentions; they get it. LLMs are inherently untrustworthy. The LLM text has no beginning and no end, no introductory section, no thesis statement or organizing idea, no statement of purpose. LLM text is alien.
Other readers begin the LLM reading event as though the LLM were an author with intent. Some students I interviewed prompted the bot with information about a chapter they had been assigned to read. The reading experience was pleasant, but when they turned to the end of the chapter and tried to answer the discussion questions, not a single word came to mind. The LLM text felt comprehensible while they were reading it, but it fell apart. It didn't stick.
Both the serious student quarantining from AI and the naive student approaching the machine without any sense of a learning posture need LLM pedagogy. The reader in an LLM session begins as a writer. This writer must compose a prompt with enough clarity and detail that the LLM can adopt the reader's intention and proceed to generate content suitable for meeting the reader's needs. Put differently, the reader of LLM text begins the reading process by writing a prompt. The prompt determines the quality of the synthetic text. The LLM works to fulfill the intention.
By creating and teaching a distinct pedagogy of in medias res LLM text reading, built upon what we know about human reading comprehension, educators can give students appropriate experiences for working with an LLM, with all its semantic instability, while maintaining epistemic agency. This approach would prepare students for a world where reading artificial text effectively is as essential as reading human text.
How AI Writing Reveals Itself: The Hidden Patterns in Machine Text
AI writing has distinctive fingerprints that go far beyond obvious giveaways like the word "delve." Researchers studying millions of texts have found subtle but consistent patterns in how AI uses language. These patterns matter because they reveal systematic flaws in AI texts that require different reading strategies than we use for human-authored content.
Unlike human writers, who draw on a personal vocabulary of words they believe express their experiences, knowledge, and intentions, an LLM creates text by filling slots in patterns with words selected mathematically, a process that has nothing to do with semantics or resonance. Understanding these patterns helps us read AI text from a stance of strategic skepticism, recognizing when the LLM is fabricating coherent-sounding nonsense.
PRACTICE: Recognize every word in LLM text is a compromise
Biases Against Clarity in Reading Comprehension
Lack of Perplexity: AI text follows rules that human writing violates. Humans instinctively make surprising word choices, the property researchers measure as "perplexity," to heighten their reader's attention. LLM writing has low perplexity; the machine avoids surprise by picking the most likely word to fill each slot.
A lack of perplexity in a text, meaning its language is overly predictable, undermines reading comprehension in several ways. Low engagement is inevitable when word choices are clichéd and the text monotonous; startling language, by contrast, forces the reader to pause and respond.
PRACTICE: Monitor level of engagement. Try to surprise yourself.
Predictable patterns allow less room for interpretation and critical assessment of the content. Without structural variation, subtle points may blend into a lulling rhythm, making it harder for readers to recall key concepts later.
PRACTICE: Read LLM text with a notebook at hand. Write down terms that are emphasized and track their coherence. Use a second LLM to comment on puzzling segments.
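True perplexity requires a language model, but a toy proxy, average unigram surprisal of a text against its own word frequencies, conveys the idea: the more a text recycles the same words, the lower its average surprisal. A minimal illustrative sketch, not a detection tool:

```python
import math
from collections import Counter

def mean_surprisal(text):
    """Average unigram surprisal (bits per word) of a text measured
    against its own word frequencies -- a toy stand-in for perplexity."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return sum(-math.log2(counts[w] / total) for w in words) / total

# Repetitive, predictable text scores lower than varied text.
flat = "the model is good the model is good the model is good"
varied = "startling diction jolts readers awake mid paragraph"
assert mean_surprisal(flat) < mean_surprisal(varied)
```

The contrast between the two sample strings, one looping and one varied, is the whole point: predictability shows up as a number.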
Lack of Burstiness: Another observable is "burstiness," or how much sentence structure varies throughout a text. Humans naturally write some short, punchy sentences and some long, complex ones. Great writers use structure to emphasize, to vary the pace, to clarify logical connections, and much more (cf. Pearson, 1974). LLMs maintain consistent sentence patterns with a limited range of syntactic functions. This inability to use syntax to reinforce and better connect intended meaning contributes to a perception of blurriness rather than burstiness.
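Burstiness can be approximated as the spread of sentence lengths. A rough sketch using the standard deviation of words per sentence (the sentence splitter here is deliberately naive):

```python
import re
import statistics

def burstiness(text):
    """Population standard deviation of sentence length in words:
    a rough proxy for how much sentence structure varies."""
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

# A mix of punchy and sprawling sentences scores high; uniform ones score zero.
human = "Stop. The pitch curved far more sharply than anyone in the stadium expected. Swing!"
llm = "The pitch was fast. The batter was ready. The swing was late."
assert burstiness(human) > burstiness(llm)
```

The uniform three-sentence sample returns exactly zero, which is the statistical face of the "lulling rhythm" described above.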
Word Diversity: Researchers have also measured vocabulary diversity. AI tends to reuse the same words and phrases in predictable cycles while human writers naturally vary their vocabulary more organically. Polysemous words (words with rich variations in semantics and resonance) are more difficult for LLMs because LLMs prefer the common over the outlier. Humans prefer precision.
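One common diversity measure is the type-token ratio: unique words over total words. A minimal sketch (real stylometry uses length-normalized variants, since raw TTR shrinks as texts grow):

```python
def type_token_ratio(text):
    """Unique words divided by total words: a crude vocabulary-diversity
    score. Higher means more varied word choice."""
    words = text.lower().split()
    return len(set(words)) / len(words)

# Cyclic reuse of the same phrases drags the ratio down.
recycled = "the approach is key and the approach is sound and the approach is useful"
varied = "knuckleballers drift while foveal gaze tracks spin and grip"
assert type_token_ratio(varied) > type_token_ratio(recycled)
```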
Transitional Misfires: AI has a serious addiction to formal connecting words. Research shows "additionally" appears 88% more often in AI text than human writing. "Moreover" and "furthermore" follow the same pattern. AI clusters these transition words at the beginning of sentences with mechanical regularity.
The problem is that these words are not used with the care human writers take. LLMs make no distinction between "moreover" and "furthermore," though effective human writers understand that "moreover" consistently signals additional supporting information, with specific functions ranging from simple reinforcement to logical escalation to consequential implication, depending on both its position in the sentence and the nature of the information it introduces. LLMs always use these words to start sentences, never in the middle, never at the end.
Prepositional Phrases: AI also loves overwrought prepositional phrases. Instead of saying "during," it says "throughout the course of." Instead of "because," it chooses "by virtue of." The phrase "in the realm of" has become so associated with AI that it's practically a meme among researchers. Prepositional phrases involve relational words that shape semantics across nouns and verbs, core carriers of meaning in a sentence. When they are used haphazardly, depending on a probability of fit rather than logic, micro-distortions in meaning can accumulate and result in comprehension problems.
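These tendencies are straightforward to count. The phrase lists below are small illustrative samples, not the full lexicons detection studies use; the sketch tallies sentence-initial transition words and overwrought prepositional phrases per sentence:

```python
import re

# Illustrative samples only; research lexicons are far larger.
TRANSITIONS = ["additionally", "moreover", "furthermore"]
WORDY_PHRASES = ["throughout the course of", "by virtue of", "in the realm of"]

def marker_density(text):
    """Sentence-initial transition words plus wordy prepositional
    phrases, averaged per sentence."""
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
    initial = sum(1 for s in sentences
                  if s.lower().split()[0].rstrip(',') in TRANSITIONS)
    wordy = sum(text.lower().count(p) for p in WORDY_PHRASES)
    return (initial + wordy) / len(sentences)
```

A high density on a passage is a cue to read its connectives as statistical filler rather than logical signals.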
The Hedging Problem: When AI Can't Commit
Perhaps nothing reveals the problem of camouflaging meaning to decrease statistical risks more clearly than excessive hedging. AI systems can't properly calibrate how confident they should sound, so they hedge everything with the same moderate uncertainty.
Common AI hedging phrases like "it seems" and "it appears" appear three times more frequently in LLM text than human writing. "Perhaps" and "possibly" cluster in conclusions, and "in general" and "generally speaking" are overused as safety mechanisms. "Quite" and "rather" appear in nearly half of AI paragraphs.
Researchers call this hedging phenomenon "false balance." AI presents settled facts with unnecessary uncertainty. It adds "on one hand... on the other hand" even for uncontroversial topics. Where humans vary their certainty based on actual knowledge, AI maintains artificial politeness and caution across all subjects.
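Hedging density can be flagged the same way. The hedge list below is a small illustrative sample of the phrases mentioned above; note that plain substring matching is crude ("quite" inside another word would also match):

```python
# Illustrative sample of hedges; studies use broader inventories.
HEDGES = ["it seems", "it appears", "perhaps", "possibly",
          "in general", "generally speaking", "quite", "rather"]

def hedge_count(text):
    """Total occurrences of common hedging phrases (crude substring match)."""
    lowered = text.lower()
    return sum(lowered.count(h) for h in HEDGES)

sample = "It seems the claim holds. Perhaps it is quite robust."
assert hedge_count(sample) == 3
```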
The Structure Problem: Too Perfect to Be Human
AI text follows rigid structural patterns. Essays begin with broad topic statements, use identical conclusion phrases ("In conclusion..."), and transition between paragraphs in predictable ways. The more AI tries to sound connected with discourse markers, the less natural the flow becomes.
Grammar analysis reveals deeper patterns. AI uses more auxiliary verbs, numbers, and symbols than human writers. It prefers certain sentence structures so consistently that these preferences create identifiable fingerprints that survive even when the text is heavily edited.
Reading LLM Text After Prompting
What may seem like an obvious and insignificant difference between LLM text and human text derives from Floridi's argument that LLMs are artificial agents without intelligence. Recall Floridi's three observables separating digital pattern matching from intelligence: interactivity, adaptability, and autonomy.
LLM texts don't exist without human prompts. Any intelligent intention is expressed in human language within the prompt. Relying on patterns presented in the prompt, the LLM projects a text that probably responds in something like a sensible fashion. This fundamental dependency means that "in medias res" reading of AI text is actually a collaborative process between human intention (the prompt) and machine pattern-matching (the output).
Understanding these patterns changes how we read AI-generated content. When you see excessive transition words, they're not actually signaling logical connections—they're statistical filler. When AI hedges everything with "it seems" or "perhaps," it's not expressing genuine uncertainty about the content—it's following training to avoid definitive statements.
This means traditional reading strategies that look for authorial intent, emphasis, and sustained argument become less useful with AI text. Instead, we need to read AI text more like we'd read a database query result, extracting information while recognizing that the presentation patterns don't carry the same meaning they would in human writing.
Why Human Reading Comprehension Differs Categorically from LLM Processing
The distinction between human and machine text processing ultimately centers on what cognitive scientists call the "prior knowledge effect" in reading comprehension. Human readers don't simply decode symbols—they actively construct meaning by integrating incoming information with personal experience, cultural understanding, and domain-specific knowledge accumulated over years. This integration happens through two key processes: assimilation (fitting new information into existing mental frameworks) and accommodation (restructuring our understanding when new information doesn't fit current schemas, sometimes building new ones).
LLMs, by contrast, have no prior knowledge, not even in the sense of files on a hard drive; they have only words that fit statistical patterns derived from training data. They cannot assimilate new information into personal experience or accommodate conflicting ideas by restructuring beliefs. Their "understanding" is pattern completion, not meaning construction. They cannot learn in the human sense.
Toward a Pedagogy of "In Medias Res" Reading
Given these fundamental limitations, we must develop pedagogical frameworks that preserve epistemic integrity while building necessary digital literacy skills. The "in medias res" approach I propose would train readers to engage with AI text through strategic interaction and prepare them to treat LLM text as a hostile witness.
This pedagogy would emphasize several key observables:
Pattern Recognition Over Meaning-Making: Students learn to identify the fingerprints of artificial text, preparing them to adjust their reading approach. Rather than seek meaning in transition words or hedging language, students learn to read through these surface patterns to extract potentially useful information while maintaining skeptical distance from the text's apparent coherence.
Epistemic Vigilance: Students develop heightened awareness of when AI-generated claims require verification. They learn to distinguish between tasks where AI pattern-matching might be useful (formatting, brainstorming, initial organization) and contexts requiring genuine expertise or nuanced understanding. This vigilance becomes as important as traditional academic integrity.
Strategic Prompting: Since AI text generates from human prompts, students learn to understand AI interaction as a co-operative process where the quality of output depends heavily on the precision and strategic design of the prompt. This prompting shifts the focus from passive reading to active engagement with the AI system as a tool.
Verification Protocols: When students do decide to use AI-generated information, they learn systematic approaches for triangulating AI content with human sources. These might involve checking AI claims against primary sources, consulting expert human opinions, or using AI output as starting points for further research rather than endpoints.
Contextual Application: Students learn to recognize appropriate contexts for AI interaction. Quick information gathering, initial experimental drafts, and brainstorming sessions might benefit from AI assistance, while complex tasks must be undertaken with careful planning and self-monitoring, revisiting the intention of the tool usage.
Assessment and Implementation
A successful "in medias res" reading pedagogy would require new forms of assessment that evaluate students' ability to navigate the distinction between human and AI text effectively. Rather than testing whether students can detect AI text, assessments would measure whether students can extract useful information from AI sources while maintaining appropriate skepticism and verification practices.
Teachers would need training in both the technical aspects of AI text patterns and the cognitive science underlying different reading approaches. Professional development would emphasize how to scaffold students' transition between immersive reading of human texts and strategic interaction with AI-generated content.
Classroom activities might include exercises where students practice identifying AI text patterns, comparing AI and human treatments of the same topic, and developing effective prompting strategies. Students would learn to document their verification processes when using AI sources, making their epistemic reasoning visible and assessable.
The Stakes of Literacy in an AI-Mediated World
The goal isn't to make students better consumers of AI content, but to make them more discerning readers who understand when they're encountering the hollow patterns of artificial agency versus the meaningful intentions of human intelligence. Students who master both immersive reading of human texts and strategic interaction with AI systems will be better prepared for academic, professional, and civic participation in an AI-mediated world.
Those who avoid AI interaction entirely may find themselves unprepared for contexts where AI-generated content is ubiquitous and unavoidable. Conversely, students who engage with AI text using traditional reading approaches designed for human authors may gradually lose their capacity for the deep, sustained engagement that human-authored texts reward and require.
At a place in human history where the distinction between pattern-matching and genuine understanding may determine the difference between meaningful learning and sophisticated confusion, such pedagogical clarity represents not just educational innovation but a defense of human intelligence itself. The students who learn to navigate both domains—immersive engagement with human thought and strategic interaction with artificial agents—will preserve what is most valuable in human literacy while gaining essential skills for an AI-integrated future.
The alternative, watching students either retreat from AI entirely or lose their capacity for deep human-to-human intellectual engagement, risks creating a generation unprepared for both traditional scholarship and emerging technological realities. The "in medias res" approach offers a path forward that respects both the human mind as the Olympian it is and the practical necessities of an AI-mediated world. Recall Lewis Campbell's description of a feature of Plato's signature style in the Dialogues, the signature of human intelligence enacted in words:
“An extreme minuteness of distinction, giving rise to new compounds and derivatives, and to the employment of old words with new shades of meaning.”
Sources
https://techxplore.com/news/2025-02-differences-human-ai-generated-text.html#google_vignette
https://royalsocietypublishing.org/doi/10.1098/rsos.241313
https://mededu.jmir.org/2025/1/e62779
https://www.ctl.ox.ac.uk/ai-tools-in-teaching
https://dzone.com/articles/llm-reasoning-limitations
https://www.pnas.org/doi/abs/10.1073/pnas.2422455122
https://autogenai.com/apac/blog/differences-between-human-writing-and-ai-writing/
