Why LLMs Matter Despite Their Agency without Intelligence
The Second Essay in a Series of Five Drawn from Luciano Floridi's LoA Model
Introduction: A Theoretical Convergence
The integration of Large Language Models (LLMs) into writing instruction presents a unique opportunity to revisit foundational theories of writing development through new analytical lenses. Consider two assignments from an eighth-grade history class:
Assignment 1: Write a one-page report on the causes of the American Revolution. Include at least three key events that led to colonial independence and explain what happened in each.
Assignment 2: Analyze how the American Revolution might have unfolded differently if France had not entered the war. Develop an alternative historical scenario and explain how it would have changed the outcome for both Britain and the colonies.
The first assignment calls for knowledge-telling—retrieving established information about Revolutionary War causes from sources and reporting it. The second requires knowledge-transformation—learners must draw on their understanding of military strategy, international relations, and revolutionary politics to develop new insights about historical contingency.
If you're wondering whether that essay your student just turned in was written by ChatGPT, you're asking the wrong question. The right question is whether they learned anything while writing it.
In this essay, we examine how Bereiter and Scardamalia's (1982) influential distinction between these knowledge-telling and knowledge-transforming writing processes might be reframed using Luciano Floridi's Method of Abstraction, discussed in the first essay of this five-part series. This convergence of cognitive writing theory and philosophical methodology offers teachers a powerful sociocultural framework for understanding how LLMs participate in writing at different levels of abstraction.
Tracking Observables
Bereiter and Scardamalia's research identified two categorically different, highly abstract approaches to writing: a) knowledge-telling and b) knowledge-transforming. Floridi's Method of Abstraction provides a rigorous approach to modeling complex systems by identifying observables at varying levels of abstraction, their possible values, and the predicates that govern their behavior. By applying Floridi's method to Bereiter and Scardamalia's categories of writing, we can begin to explore more precise understandings of how LLMs—agents with valuable capabilities but no intelligence—participate in these distinct writing processes.
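Floridi's method can be pictured concretely: a Level of Abstraction is a set of typed observables, each with a finite range of values, governed by predicates over their joint behavior. The Python sketch below is our own illustration of that structure, not Floridi's formal notation; the observable and value names anticipate the knowledge-telling model developed later in this essay.

```python
from dataclasses import dataclass

# A Level of Abstraction (LoA): typed observables plus behavioral predicates.
# Illustrative sketch only; the names are ours, not Floridi's formalism.

@dataclass(frozen=True)
class Observable:
    name: str
    values: tuple  # the finite type of values this observable may take

LOA_KNOWLEDGE_TELLING = [
    Observable("agency_distribution",
               ("human-directed", "collaborative", "LLM-predominant", "fully delegated")),
    Observable("topic_association",
               ("tightly focused", "tangentially related", "loosely associated", "topic drifting")),
]

def well_typed(state: dict) -> bool:
    """A basic predicate: every observable must hold one of its typed values."""
    return all(state.get(o.name) in o.values for o in LOA_KNOWLEDGE_TELLING)
```

A "state" of the writing system at this LoA is then just one value per observable, and richer predicates (sketched in Part I below) constrain which states are pedagogically sound.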
Part I: Knowledge-Telling with LLMs
Knowledge-telling, as Bereiter and Scardamalia described it, represents the more spontaneous approach to writing, where writers essentially "tell what they know" or find out what they need to know about a topic, drawing directly from memory or external memory stores without significantly reconstructing their understanding. This process follows what they described as a "think-say" pattern: the writer retrieves content based on topic cues and then puts this content into appropriate language. The resulting text typically reflects the structure of the writer's existing knowledge rather than a reorganization prompted by rhetorical goals.
In our Revolutionary War causes assignment, knowledge-telling would involve students retrieving information about events like the Boston Tea Party, the Stamp Act, and the Boston Massacre, then organizing this information chronologically or by importance. Without rhetorical goals, a student thinks:
"The assignment asks for causes of the American Revolution. I know about the Stamp Act, the Boston Tea Party, and the Intolerable Acts. I'll write about each one."
The resulting text follows their mental storage pattern:
Paragraph 1: The Stamp Act happened in 1765...
Paragraph 2: The Boston Tea Party happened in 1773...
Paragraph 3: The Intolerable Acts happened in 1774...
When we apply Floridi's Method of Abstraction to knowledge-telling with LLMs, we can develop a Level of Abstraction (LoA) model that identifies the key observables in this system from a distance and from close-up.
Observables in Knowledge-Telling with LLMs
Content Retrieval represents how information is accessed for writing purposes. In human writers, this access involves memory search; in LLM-assisted writing, it involves both human memory and pattern retrieval from the LLM's training data. For the Revolutionary War assignment, retrieval might involve recalling dates, participants, and consequences of key events.
Topic Association captures how closely the generated content relates to the assigned topic. This observable reflects the writer's ability to maintain focus and the LLM's capacity to generate relevant responses to prompts. In the Revolutionary War assignment, staying focused on causes rather than drifting into descriptions of battles or biographical details would indicate effective topic association.
Discourse Structure represents the organizational patterns applied to content, including genre conventions, rhetorical patterns, and text structures appropriate to specific writing purposes. For the Revolutionary War causes report, students might select a chronological structure or organization by types of grievances (economic, political, social). Or they might scramble the organization as a means to disguise their use of an LLM in certain contexts.
Agency Distribution indicates who or what initiates and controls the writing process, ranging from primarily human-directed to primarily LLM-generated content. A student might use an LLM to generate initial descriptions of complex events like the Intolerable Acts, then edit and integrate these into their own writing.
Coherence Management reflects how the developing text maintains internal consistency. Maintaining logical connections between ideas and ensuring smooth transitions between sections are observables that speak to this level of abstraction. In the Revolutionary War report, clearly connecting how each event contributed to growing colonial resistance would indicate effective coherence management, though the degree depends on the particular response.
Typed Variables and Their Values
Each of these observables can take on specific values. For Content Retrieval, we might observe direct reporting (straightforward transmission of existing knowledge), prompted extraction (guided retrieval of specific information, with textual cues left as residue), associative generation (production of content based on topic associations), or synthetic combination (the merging of information from multiple sources). In the Revolutionary War assignment, a student might directly report the standard narrative of the Boston Tea Party as one element of a summary of the causes.
Topic Association might range from tightly focused (consistently relevant to the central topic), through tangentially related (connected but somewhat peripheral) and loosely associated (showing limited relevance), to topic drifting (gradually moving away from the original subject). A tightly focused response would stay specifically on pre-war causes rather than drifting into descriptions of Revolutionary War battles.
Discourse Structure could appear as linear organization (simple sequential arrangement), conventional patterning (following standard genre expectations like a five-paragraph essay), adaptive structuring (modified to suit specific content), or fragmented arrangement (lacking clear organizational logic). The Revolutionary War report would likely mirror conventional historical narrative patterns, moving from early to later causes.
Agency Distribution might be observed as human-directed (primary agency with the human writer), collaborative distribution (shared agency between human and LLM), LLM-predominant (majority of content decisions made by the LLM), or fully delegated (complete reliance on LLM-generated text).
Coherence Management could manifest as local coherence (consistency within paragraphs), sequential coherence (logical connections between adjacent sections), global coherence (overall unity of the entire text), or thematic integration (organization around central themes). For the Revolutionary War report, sequential coherence would be particularly important to show escalating tensions between Britain and the colonies.
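These typed variables can be pictured as enumerations, with one state of the knowledge-telling system being a choice of one value per observable. A minimal sketch, with names of our own choosing:

```python
from enum import Enum

# Illustrative encodings of two typed variables from the knowledge-telling LoA.
# The value strings paraphrase the essay's vocabulary; the encoding is ours.

class AgencyDistribution(Enum):
    HUMAN_DIRECTED = "human-directed"
    COLLABORATIVE = "collaborative distribution"
    LLM_PREDOMINANT = "LLM-predominant"
    FULLY_DELEGATED = "fully delegated"

class CoherenceManagement(Enum):
    LOCAL = "local coherence"
    SEQUENTIAL = "sequential coherence"
    GLOBAL = "global coherence"
    THEMATIC = "thematic integration"

# One possible state: the Revolutionary War report described above, where a
# student shares agency with the LLM and sequential coherence matters most.
report_state = {
    "agency": AgencyDistribution.COLLABORATIVE,
    "coherence": CoherenceManagement.SEQUENTIAL,
}
```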
Predicates in Knowledge-Telling with LLMs
The predicates in this LoA establish how these observables relate to each other in knowledge-telling with LLMs:
When Agency Distribution shifts toward LLM-predominant, Content Retrieval tends toward associative generation based on statistical patterns rather than intentional knowledge selection. In the Revolutionary War assignment, an LLM might include tangentially related information about colonial life that wasn't specifically about causes of the war.
Topic Association remains consistently stronger when Agency Distribution maintains significant human direction, regardless of the sophistication of the LLM. Students who carefully direct LLM prompts about specific Revolutionary War causes will receive more focused responses than those who ask for general colonial history.
Discourse Structure typically achieves conventional patterning even with minimal human input, as LLMs excel at reproducing genre conventions through pattern recognition. An LLM can easily produce a standard historical report format based on countless similar explanations in its training data.
Coherence Management at the global level requires human oversight regardless of how much content is LLM-generated, as LLMs lack the goal-directed thinking needed to maintain overall unity. A single careless word in a prompt can shift the goal of the generated text. Students need to ensure that their report maintains a clear narrative arc from early grievances to the Declaration of Independence; that observable requires human ownership.
When Content Retrieval relies primarily on LLM generation, historical accuracy depends entirely on the quality of the LLM's training data rather than the writer's knowledge validation. An LLM might reproduce common misconceptions about Revolutionary War causes if these appear in its training data, requiring student verification against reliable sources.
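Read as rules, these predicates flag risky combinations of typed values. A hedged sketch of two of them, in our own encoding (the value strings paraphrase the typed variables above):

```python
# Two predicates from the knowledge-telling LoA, encoded as rule functions.
# The value names paraphrase the prose; the encoding itself is illustrative.

def associative_drift_risk(agency: str) -> bool:
    """When Agency Distribution shifts toward the LLM, Content Retrieval
    tends toward associative generation rather than intentional selection."""
    return agency in ("LLM-predominant", "fully delegated")

def requires_human_oversight(coherence_level: str) -> bool:
    """Global coherence requires human oversight no matter how much
    content is LLM-generated."""
    return coherence_level == "global coherence"
```

A teacher-facing tool built on rules like these could, for instance, prompt a student to verify sources against their own knowledge whenever `associative_drift_risk` fires.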
This Level of Abstraction reveals important insights about knowledge-telling with LLMs. The process leverages the complementary capabilities of human and artificial agents: humans provide topical guidance and coherence oversight, while LLMs contribute fluent text generation and genre awareness. However, this arrangement also creates specific risks, particularly when agency shifts too heavily toward the LLM, potentially compromising topic focus and historical accuracy.
For teachers, this LoA offers a framework for understanding how students might engage with LLMs in more basic writing tasks like the Revolutionary War report. It highlights that even in knowledge-telling, human agency remains crucial for ensuring relevance, accuracy, and overall coherence. Students using LLMs for knowledge-telling may produce superficially impressive reports about Revolutionary War causes that nonetheless lack depth or historical precision if they delegate too much agency to the artificial system.
Part II: Knowledge-Transforming with LLMs
Knowledge-transforming in Bereiter and Scardamalia's model represents a more complex approach to writing where the composition process itself becomes a means of developing understanding. Writers engaged in knowledge-transforming move between content and rhetorical problem spaces, using writing to discover new connections, resolve contradictions, and restructure their knowledge. This dialectical process results in learning through writing, with the final text reflecting not just what the writer knew at the outset but what they came to understand through composition.
Our second history assignment—analyzing how the American Revolution might have unfolded without French intervention—exemplifies knowledge-transforming writing. To complete this task, students must integrate their understanding of military strategy, international diplomacy, and revolutionary politics, potentially transforming their understanding of historical causation and contingency.
Applying Floridi's Method of Abstraction to knowledge-transforming with LLMs yields a different set of observables that capture this more complex process:
Observables in Knowledge-Transforming with LLMs
Problem Representation captures how writing challenges are conceptualized. This includes both content problems and rhetorical problems. Rhetorical goals are the writer's intentions for how their text will affect readers—what they want their writing to do in the world, not just what information it contains. These goals go beyond simply conveying information to consider:
Purpose: What change do I want to create in my reader's understanding, beliefs, or actions?
Audience: Who specifically am I trying to reach, and what do they need from this text?
Effect: How do I want readers to think or feel after reading?
Strategy: What approach will best achieve my purpose with this audience?
Rhetorically-Driven Approach: A student thinks:
"I want to convince my reader that the Revolution was inevitable given British policies. My teacher probably thinks it could have been avoided, so I need to show how each British action made reconciliation less possible."
The resulting text is reorganized around this rhetorical goal:
Paragraph 1: Once Britain began taxing without representation, a precedent was set that made conflict inevitable...
Paragraph 2: Each British response to colonial protest escalated rather than resolved tensions...
Paragraph 3: By 1774, positions had hardened beyond the possibility of compromise...
In the alternative history assignment, students must represent both the historical problem (how French support affected the war's outcome) and the rhetorical problem (how to construct a plausible counterfactual argument).
Dialectical Movement represents the back-and-forth between content and rhetorical considerations that characterizes knowledge-transformation. Addressing rhetorical goals leads to reconsidering content. In developing their alternative scenario, students might recognize that presenting a convincing argument requires deeper understanding of both French military contributions and British strategic options. A student might begin with a simple assumption that the colonists would have lost without France but develop a more nuanced understanding of various possible outcomes through the writing process.
Metacognitive Regulation indicates awareness and control of thinking processes during writing. Monitoring comprehension, evaluating ideas, and directing the allocation of cognitive resources are basic regulation skills. Students working on the alternative history assignment must recognize gaps in their understanding and determine how to address them through research or reasoning.
LLM Integration shows how artificial systems participate in the knowledge-transforming process. LLM outputs are incorporated into the dialectical movement between content and rhetorical problem spaces. A student might use an LLM to generate potential consequences of no French intervention, then critically evaluate these suggestions against their own historical understanding.
Typed Variables and Their Values
Problem Representation might appear as a) content-focused (primarily concerned with subject matter), b) rhetorically-oriented (centered on communication goals), c) integrated (balancing content and rhetorical concerns), or d) fragmented (disconnected treatment of content and rhetorical issues). A student might begin with a content-focused representation of French military contributions before considering how to persuasively present their alternative scenario.
Dialectical Movement could range from unidirectional (moving only from content to expression), alternating (switching between content and rhetorical considerations), recursive (repeatedly revisiting both problem spaces), to transformative (fundamentally reshaping understanding through integration of problem spaces). A student developing the alternative history might move recursively between understanding French contributions and developing effective ways to present counterfactual reasoning.
Knowledge Restructuring might be observed as connection-forming (linking previously separate ideas), contradiction-resolving (addressing inconsistencies in understanding), concept-refining (clarifying and developing key ideas), or framework-reconstructing (fundamentally reorganizing conceptual structures). Students might form new connections between their understanding of 18th-century naval power and colonial supply lines to develop their alternative scenario.
Metacognitive Regulation could manifest as process awareness (recognition of cognitive activities), strategic planning (intentional approach to writing tasks), reflective monitoring (ongoing evaluation of writing progress), or adaptive control (adjusting strategies based on feedback). Students must monitor their developing understanding of historical contingency, recognizing when they need to research specific aspects of French involvement or British capabilities.
LLM Integration might range from tool-based (LLM used for specific, limited functions), dialogical (interactive exchange between writer and LLM), scaffolded (LLM providing developmental support), to co-constructive (shared role in development of human understanding). A student might engage in dialogical interaction with an LLM, asking it to suggest potential consequences of no French naval support, then evaluate, build upon, modify, or reject these suggestions.
Predicates in Knowledge-Transforming with LLMs
The predicates in this LoA establish the relationships between observables in knowledge-transforming with LLMs:
Genuine Knowledge Restructuring occurs only when Metacognitive Regulation remains primarily under human control, regardless of the level of LLM Integration. Students must maintain awareness of their own developing understanding of historical contingency rather than simply accepting LLM-generated alternative scenarios.
Dialectical Movement cannot achieve transformative levels without human intelligence directing the process, as LLMs lack the capacity to move between problem spaces without direction. While an LLM might generate content about potential British strategies, it cannot navigate between understanding this content and determining how to effectively argue for its plausibility without human guidance.
When LLM Integration reaches co-constructive levels, Problem Representation must remain integrated under human guidance to prevent the appearance of transformation without actual cognitive development. Students who extensively use LLMs to develop their alternative history must maintain clear awareness of both historical content and argumentative dimensions of the task.
Knowledge Restructuring at the framework-reconstructing level requires explicit articulation of conceptual changes, regardless of how extensively LLMs are integrated into the writing process. Students should articulate how their understanding of the Revolution's contingent nature has developed through the writing process rather than simply presenting their final scenario.
Metacognitive Regulation determines the quality of Knowledge Restructuring (what happens in the mindscape) more significantly than the sophistication of LLM outputs. A student with strong metacognitive awareness who critically evaluates LLM suggestions about alternative Revolutionary War outcomes will develop more sophisticated historical understanding than one who uncritically incorporates polished LLM content.
This Level of Abstraction reveals crucial insights about knowledge-transforming with LLMs.
While LLMs can participate in this more complex writing process, they cannot themselves engage in knowledge transformation because they have no knowledge.
However, they can support human knowledge transformation by generating alternative perspectives, articulating implicit connections, or providing scaffolding for metacognitive processes. All of these LLM services are available because LLMs can gamble on the next word based upon letter strings co-occurring in semantic neighborhoods.
For teachers, this LoA offers a framework for guiding more sophisticated writing tasks like the alternative history assignment that involve LLMs. It emphasizes that knowledge-transforming remains exclusively a human cognitive process even when artificial systems participate. LLMs simply cannot transform knowledge. Students engaging in knowledge-transforming writing must maintain metacognitive control, actively directing the dialectical movement between problem spaces, and explicitly articulate their developing understanding rather than simply incorporating LLM-generated content about alternative historical scenarios.
Pedagogical Implications of the LoA Framework
The convergence of Bereiter and Scardamalia's writing models with Floridi's Method of Abstraction creates powerful analytical tools for teachers figuring out the integration of LLMs into writing instruction. By explicitly identifying the observables, variables, and predicates in both knowledge-telling and knowledge-transforming with LLMs, teachers can develop more precise understandings of how these artificial agents influence writing processes—both their affordances and their deceptive pitfalls.
The Dark Side: Typed Variables that Inhibit Authentic Learning
Our framework reveals several "darker" typed variables that actively undermine cognitive development when LLMs are integrated into writing processes:
Fully delegated agency represents perhaps the most dangerous configuration, where students surrender all decision-making to the LLM. In our Revolutionary War causes assignment, a student could prompt an LLM to "write a report on causes of the American Revolution" and submit the result with minimal engagement. The resulting text may appear competent, even sophisticated, but represents zero cognitive development for the student.
Fragmented problem representation occurs when students fail to integrate content and rhetorical challenges, often because LLMs can generate coherent text without this integration. A student might use an LLM to produce plausible alternative history scenarios without ever grappling with the deeper question of historical causation and contingency.
Unidirectional dialectical movement emerges when the appearance of knowledge-transforming writing masks a lack of the desired cognitive work—despite the cognitive work expended to create the mask. Students can prompt LLMs to generate text that mimics the back-and-forth of sophisticated thinking—"On one hand... yet on the other hand..."—without experiencing any actual dialectical tension or resolution.
Surface-level metacognitive regulation represents perhaps the most insidious threat: students who learn to perform metacognition rather than practice it. They might include reflective statements generated by LLMs about their "learning process" without having experienced any genuine self-awareness or cognitive monitoring.
These darker variables create what we might call "cognitive bypass"—the appearance of learning without its substance. Students produce texts that meet all surface criteria for knowledge-telling or even knowledge-transforming writing while experiencing no actual cognitive development.
The Bright Side: Robust Variables that Support Knowledge Work
However, our framework also identifies robust typed variables that reveal genuine affordances of LLMs for knowledge consolidation and extension:
Dialogical LLM integration can support authentic learning when students engage LLMs as thought partners rather than ghostwriters. A student developing an alternative history of the Revolution might use an LLM to generate multiple scenarios, then critically evaluate each against their historical knowledge, deepening their understanding of contingency.
Tool-based agency distribution maintains human control while leveraging LLM capabilities for specific functions. Students might use LLMs to quickly generate lists of French military contributions, then synthesize and evaluate this information themselves, consolidating their factual knowledge while maintaining cognitive engagement.
Scaffolded knowledge restructuring occurs when LLMs help students articulate connections they sense but cannot yet express. A student might recognize that French naval power was crucial but use LLM-generated explanations to better understand and articulate the relationship between naval supremacy and colonial supply lines.
Strategic metacognitive regulation emerges when students consciously direct their use of LLMs based on their learning needs. They might recognize when they need factual support versus when they need to work through confusion independently, using LLMs strategically rather than reflexively.
These robust variables suggest that LLMs can serve as powerful tools for knowledge work when integrated thoughtfully into writing processes that maintain human cognitive engagement. This analysis points to an urgent pedagogical need: the systematic use of metacognitive logs that document not just what students write but how they think while writing. These logs should capture:
Decision points where students chose to engage or not engage LLMs
Moments of confusion and how they were resolved
Instances where LLM outputs challenged or confirmed existing understanding
Explicit articulation of how understanding changed through the writing process
Reflection on the cognitive work performed versus delegated
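A metacognitive log of this kind could be captured in a simple record, one entry per decision point. The field names below are our own illustration of the five items above, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class MetacognitiveLogEntry:
    # Field names are illustrative; they mirror the five log items listed above.
    decision_point: str            # chose to engage or not engage the LLM, and why
    confusion: str = ""            # a moment of confusion and how it was resolved
    llm_effect: str = ""           # how an LLM output challenged or confirmed understanding
    understanding_shift: str = ""  # how understanding changed through the writing process
    work_delegated: bool = False   # was this cognitive work performed or delegated?

# Example entry from the alternative-history assignment discussed earlier.
entry = MetacognitiveLogEntry(
    decision_point="Asked the LLM for consequences of no French naval support",
    llm_effect="Challenged my assumption that the colonists would simply have lost",
)
```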
Teaching Writing When Machines That Can’t Think Can Write
The LoA framework highlights a challenge: LLMs can produce the surface features of both knowledge-telling and knowledge-transforming texts without any of the cognitive work these texts traditionally represented.
This new circumstance forces us to reconsider what we're actually teaching when we teach writing.
For knowledge-telling, the traditional goal was clear communication of existing knowledge. Students learned to retrieve information, organize it logically, and present it coherently, often in the space of five paragraphs. Now an LLM can do this in seconds. Our analysis shows that human judgment is crucial even to knowledge-telling—deciding what's relevant, verifying what's accurate, and ensuring the text serves its actual purpose. These aren't generic "critical thinking" skills; they're specific competencies in managing information flow in human discourse.
For knowledge-transforming, the stakes are higher. The dialectical movement between content and rhetorical problems—the very mechanism through which writing develops thought—cannot be delegated to a system without understanding. Yet LLMs can mimic this movement textually, producing phrases like "upon further reflection" or "this raises the question" without any reflection or questioning occurring.
This creates a sort of pedagogical quicksand. How do we teach knowledge-transforming when its textual markers can be faked and we can be deceived? Our analysis suggests focusing on moments of genuine cognitive friction—when students discover they don't understand something they thought they did, when they find contradictions in their thinking, when they struggle to articulate intuitions. These moments can't be delegated or accelerated.
The metacognitive logs aren't just assessment tools—they're where the real writing happens now. The struggle to articulate why you're confused, the attempt to explain what changed in your understanding, the effort to track your own thinking—this is where students learn to use language to develop thought. The final essay becomes almost incidental, a by-product rather than the purpose.
Much of the current discourse is about protecting some nostalgic ideal of writing. Floridi, speaking to Bereiter and Scardamalia over the decades, tells us it's about recognizing that in a world of instant text generation, writing as a tool for thinking becomes more valuable, not less.
The student who can recognize when they need to work through confusion themselves, who can articulate how their understanding develops, who can use LLMs strategically without delegating their metacognitive authority—this student possesses a form of literacy adapted to our current moment.
This kind of writing instruction calls for all hands on deck. The important lesson from applying the LoA framework, it seems to me, is that students must be taught to value and develop their own cognitive processes more highly than their products, even when grades are the currency of the realm. This is not because machines are threats, but because the ability to think through writing remains distinctly human and existentially necessary.
Discussion Questions for Educators
1. Practical Implementation of Metacognitive Logs How might metacognitive logs be structured to capture authentic cognitive work without becoming performative exercises themselves? Consider: What specific prompts would elicit genuine reflection rather than surface-level responses? How can we distinguish between a student who is actually confused and working through ideas versus one who has learned to perform confusion? What would a rubric for evaluating cognitive friction look like?
2. Shifting Assessment Paradigms If metacognitive logs become primary sites for assessment, how do we address the institutional pressures that still privilege polished final products? Consider: How do we communicate to parents and administrators that a messy, struggle-filled log might indicate more learning than a polished essay? What happens to traditional grading systems when the process becomes more valuable than the product? How do we ensure equity when some students are more naturally reflective or articulate about their thinking processes?
3. The Authenticity Challenge Given that LLMs can generate plausible metacognitive reflections (e.g., "Upon reflection, I realized that my initial understanding was limited..."), how do we verify the authenticity of students' documented thinking processes? Consider: Should metacognitive logs be handwritten or created in controlled environments? How do we teach students to value their genuine confusion and struggle when AI can fake sophisticated self-awareness? What role might in-person conferences play in validating the cognitive work documented in logs?
One point you might consider here, Terry, is the effect of providing certain texts to the LLM on which to base its responses. Otherwise, of course, it draws from its well of training data, which can be, depending on the topic, quite vast or quite narrow. In the American Revolution example, unless it's a research assignment, students are likely to have been assigned certain texts (a textbook chapter, perhaps some primary sources, a video or website, a timeline, etc.) on which to base their response. There are possibilities here to restrict the LLM to use only the data provided to the student so as not to overwhelm them. Additionally, with the use of a pre-set GPT or other set of instructions, the teacher can constrain the output from the LLM in ways they might prefer. In essence, when you combine some of the different capabilities of the tools, the potential for different ways to impact the writing process becomes more complex.
There is a lot here! But extremely thoughtful.
The two takeaways I am ruminating on:
1. Writing as “output-production” versus a metacognition exercise (with the emphasis on exercise)
2. The recursive problem where any proof of work can always be faked because it is all word-generation, whether it is actual prose or a record of cognition.
I go back to my time at Oxford reading papers aloud while being challenged by the prof. The paper didn’t mean squat; it was just a rhetorical substrate for discourse.