I think, therefore, I know what I know; I offload, therefore, I raise the potential of knowing more (or less)
Imagine watching a student struggle through their AP Literature exam, frantically trying to recall basic plot points from Hamlet while the clock ticks down. They had spent months relying on digital notes and character maps, never internalizing the play's fundamental structures. Meanwhile, down the hall, another student is using AI to analyze protein folding patterns—a task that would have been impossible without computational assistance.
These two scenes crystallize something that's been bothering me about our conversations around technology and thinking: we're using the same term in public discourse—cognitive offloading—to describe fundamentally different relationships between mind and machine.
The phrase "cognitive offloading" has become so ubiquitous that we've stopped questioning what it actually means. We toss it around in faculty meetings, research papers, and worried op-eds about AI's impact on education, assuming we're all talking about the same thing. We're not. This semantic laziness isn't just a linguistic pet peeve—it's actively hindering our ability to make informed decisions about when and how to integrate technology into learning environments.
A recent Microsoft study perfectly illustrates this confusion. Its headline finding—that people check AI outputs less frequently when they trust them—was treated as revelatory evidence that AI is "corroding critical thinking." But this conflates two distinct phenomena: the offloading of information storage (which we've been doing since the invention of writing) and the delegation of thinking processes themselves. Without this distinction, we're left with blunt instruments to address nuanced problems, leading to either technophobic panic or uncritical adoption.
The Microsoft study published in late January, 2025, is a good example of the communicative impact of the semantic neglect of this core concept. Its take-to-the-bank finding is absurdly obvious: When you’re confident in the output of the AI, you go to the effort of checking it less frequently. Do we need a study to tell us what is true on its face? When you’re confident in a thing, you worry less about it. You come to rely on it. Does that mean AI is corroding our ability to think critically as defined by Bloom’s taxonomy and self-reported on this survey of 319 “knowledge workers”? We seem to be hungry for data to support our fears of cognitive erosion and prefer to turn away from the challenge. I don’t think we have the luxury of this option.
I propose that educators need to distinguish between cognitive offloading—the externalization of information storage and retrieval—and what I call cogitative offloading—the delegation of active thinking processes to external agents. This distinction isn't academic; it's essential for understanding which aspects of thinking we should preserve internally and which we can responsibly externalize in our increasingly AI-augmented world. Only by developing more precise language about human-technology cognitive partnerships can we design educational experiences that enhance rather than diminish human intellectual capacity.1
Coming to Terms
It’s taken several months, but I think I’m zeroing in on what has been bothering me. It’s similar to what bothers me about the term “critical thinking,” i.e., the contradictory laminated meanings we carry around in mental plywood boxes have to be updated in this new technological reality.
New terminology emerges precisely because existing frameworks inadequately capture important nuances. By arguing that this distinction as not reflected in scholarly literature and that it is therefore unimportant, one essentially argues against the possibility of refining our conceptual vocabulary, in itself an indefensible anti-intellectual position.
The distinction between cognitive and cogitative offloading matters because it reframes how we understand human-technology relationships. While cognitive offloading concerns in eduction address primarily potential negative impact on the storage and retrieval of information, cogitative offloading involves delegating the active, effortful thinking processes we use to excavate, analyze, elaborate, supplement, and in some cases transform information into immediately useful understandings which are instrumental to success but unnecessary to remember.
Screen-sharing with a technician is an example of cogitative offloading. Such sharing isn’t about shifting the location of information in human memory—the technician has it, the client doesn’t. It’s about redistributing the mental labor of problem-solving across human agents and technological interfaces. When you share your screen with a technician, you're engaging in a form of cogitative offloading that differs subtly but significantly from cognitive offloading. This distinction illuminates the relationship between mind, action, and technological assistance.
Cogitative offloading involves delegating the active, deliberate process of engaging in the moment with problems/tasks using more expert external resources. This form offloads the mental motion itself. Rather than offloading the work of transferring information useful in the moment into memory stores (cognitive offloading), which the human would have no hope of accomplishing in the brief time spent with a virtual visit from a technician, the human transfers the active problem-solving process to another agent. The technician doesn't access data; they engage in the deliberate mental work of diagnosis, solution identification, and implementation that you would otherwise perform if you had the knowledge base.
This relationship creates a unique dynamic where your thinking process becomes distributed between yourself and the technician. Neither of you could succeed without the other. The screen serves as a shared cognitive frame of reference and activity—a thinking space where the technician can observe, analyze, and manipulate elements while simultaneously communicating with you and requesting more information. Unlike cognitive offloading to static tools (like saving a phone number to avoid memorizing it), cogitative offloading involves transferring the dynamic processing activity itself.
This perspective enriches our understanding of digital collaboration tools beyond information-sharing and retention, which is testable by way of isolated examination—they become extensions of our thinking processes, enabling forms of distributed cognition that blur the boundaries between individual and collective problem-solving activity.
Getting Specific
This section provides two examples of cognitive offloading where the objective is to build and store knowledge in longterm memory, one appropriate, one inappropriate, along with my reasoning for the judgment. Please let me know my blind spots in reasoning.
AP Literature Exam Preparation
A student preparing for their AP Literature exam consistently uses digital notes instead of engaging with literary themes using information from the literary work refreshed with each retrieval and stored more securely in long term memory. While studying Shakespeare's Hamlet, they refer to character relationship maps and plot summaries for every discussion rather than internalizing the fundamental conflicts and dramatic structures represented in the play.
During discussions, they're constantly checking their device for ideas rather than developing the mental schema that would allow them to make connections across texts and identify recurring literary patterns. When the closed-book exam arrives, they struggle to perform critical analysis because they've never developed the necessary stored mental frameworks that would have emerged through the effort of engagement and retrieval practice.
Why inappropriate: This offloading replaces the knowledge construction that enables literary analysis and appreciation. Literary criticism requires internalized knowledge of textual elements to generate meaningful interpretations and cross-text connections. The cognitive effort of storing core literary elements in organized memory is itself part of developing analytical skills. By delegating this work, the student impairs their ability to think critically about the work during unsupported assessment conditions and beyond in authentic sociocultural participation.
Advanced Historiography Research
A high school student conducting research on shifting interpretations of the Civil Rights Movement maintains a detailed digital database of historians' arguments, methodologies, and key source documents. Rather than attempting to memorize which historian made which argument (information that could fill volumes), they organize this information in a way that allows them to trace the evolution of historical interpretation over time. This external memory system allows them to focus on identifying patterns, analyzing historiographical shifts, and developing their own informed stance on historical debates—higher-order cognitive tasks that require deep engagement with the material.
Why appropriate: The volume and complexity of historiographical information exceeds reasonable memory capacity. By offloading the storage of specific arguments and citations, the student redirects cognitive resources toward analysis of patterns and methodological differences between historical interpretations. The external system enhances rather than replaces the student's analytical capacity, allowing them to engage with more complex historical thinking.
Getting Even More Specific
This section presents examples of cogitative offloading, again with an open invitation to help me revise and extend my thinking.
Molecular Biology Research Project
A high school student wants to explore how stress affects the way proteins fold in living cells, sort of like investigating how heat might change how an origami structure takes shape, but at a microscopic level. The problem is that analyzing these folding patterns is incredibly complex. It would be like trying to map thousands of origami folds changing in real-time.
Instead of suggesting abandoning this fascinating question, the teacher recommends the student use AI modeling software as a thinking partner. The student collects basic information about the proteins they're studying from public scientific databases and feeds this information into the software. The AI then creates interactive 3D models showing how these protein origami structures might change under different conditions.
When the models reveal unexpected folding patterns—perhaps a protein curling in an unusual way when exposed to heat—the student wonders, "What might this shape change mean for how the cell functions?" and "Could this be how organisms adapt to environmental stress?" Drawing on their basic biology knowledge, they develop several possible hypotheses:
Hypothesis 1: Maybe this new folding pattern exposes a previously hidden binding site that allows the protein to interact with other molecules only during stress conditions. If that's true, I should be able to see new protein interactions in heat-stressed cells that don't happen in normal cells.
Hypothesis 2: Perhaps this folding change makes the protein more stable in high temperatures, preventing it from breaking down. If so, cells with more of this protein should survive heat stress better than cells with less of it.
The AI handles the mathematical heavy lifting that would have made this exploration impossible, while the student does the genuinely scientific work of observing patterns, asking good questions, and designing clever tests of their ideas. Rather than limiting creativity, the technology opens doors to questions the student could never have explored otherwise, allowing them to think like a scientist at the edge of what's known. The example also illustrates how cognitive offloading can democratize access to advanced scientific inquiry, allowing students to engage with complex problems that would traditionally require graduate-level expertise or specialized equipment.
Medieval Literature Translation
An AP Literature student approaches Chaucer's Canterbury Tales and encounters the barrier of Middle English—a version of English from the 14th century that appears almost as a foreign language to modern readers. Rather than spending weeks learning Middle English grammar and vocabulary, they engage in thoughtful cogitative offloading using specialized digital tools under the guidance of the teacher.
The student uses CantApp, a resource developed by academics at UCL, the University of Saskatchewan, and The National Library of Wales. I can’t vouch for all five of the bullets below (I need more time to experiment with it), but I have used this app and could easily integrate it into my own English classroom, recommending that students use an exploratory LLM to complement it. Unlike traditional glossaries or footnotes, CantApp offers:
Audio performances of the text in Middle English, allowing students to hear the original pronunciation as Chaucer might have performed it
Toggle functionality to view modern English translations line-by-line directly beneath the Middle English text
Integrated vocabulary explanations of Middle English words within the reading interface
Scholarly notes and commentary accessible while reading
Access to digitized versions of original manuscripts like the Hengwrt manuscript
This access creates a multisensory, multimodal learning experience that traditional translation aids cannot provide. Traditional glossaries interject extraneous cognitive load by requiring constant back-and-forth consultation, providing limited contextual information, focusing primarily on specialized terminology, and failing to capture the auditory dimension of Middle English. Similarly, footnotes disrupt reading flow, often marking the reader as an "outsider" to the text, and provide isolated fragments of information in cryptic form.
As Shelly Bryant notes about footnotes: "The text is not being allowed to flow naturally through the landscape of the reader's mind, but is instead being confined to a narrow canal that has been dug by the translator/annotator."
By offloading the preliminary cogitative process, the student directs their cognitive resources toward the core learning objectives—analyzing narrative techniques, cultural contexts, and thematic elements. They gain a more authentic experience of the text as it was meant to be performed, can toggle between Middle English and modern translations to maintain connection to the original while ensuring comprehension, and can access scholarly commentary that enriches their analysis rather than just enabling basic understanding.
This approach recognizes that becoming fluent in Middle English isn't the primary educational goal. Although using the device for a sustained period would likely decrease the pain and increase the inclination to learn incidentally Middle English phonological coding, which I was required to learn by rote in a graduate Chaucer course. Instead, digital tools allow deep engagement with the literary richness of Chaucer's work while handling the initial comprehension hurdles of a language that has evolved significantly over 600 years.
Collaborative Poetry Explication
Students working on explicating complex modernist poetry in small groups can use AI to identify obscure allusions, untangle syntactical structures, and provide historical context for references which no one else in the group can solve. The assignment could require students to discuss and raise specific questions about conundrums before accessing the tool and to keep a log of their interaction with the LLM. The relative facility with background knowledge allows them to move more intentionally to the interpretive work of understanding how these elements contribute to theme, tone, and meaning. The class discussion focuses on developing personal responses and critical interpretations rather than being slowed by preliminary decoding and deciphering work—unless learning strategies for doing this work manually is the objective. The cogitative offloading enhances rather than replaces the core literary analysis skills.
Take Aways
The distinction between cognitive and cogitative offloading offers a framework for refining our evolving relationship with technology. While cognitive offloading addresses epistemic realities, the storage and retrieval of information, cogitative offloading concerns the metacognitive delegation of active thinking processes themselves. This distinction becomes increasingly relevant as AI systems take on more complex cognitive tasks.
It invites educators, researchers, and technology designers to consider more carefully which aspects of thinking we choose to externalize and which we should cultivate internally. Rather than asking simply whether technology is good"or bad for thinking, we might instead focus on designing educational experiences that strategically leverage appropriate forms of offloading while preserving and developing essential internal cognitive capacities.
Future research might empirically test whether these two forms of offloading have different effects on learning outcomes, cognitive development, and metacognitive awareness. At this moment, I see potential for exploration which—even it I’m wrong—could surface different insights. As we navigate an increasingly AI-augmented world, developing more precise language about human-technology thought entanglements becomes not just academically interesting but practically essential for shaping educational and technological futures that enhance rather than diminish human potential.
In the development of this essay, I first used Perplexity to conduct comprehensive web searches for examples of "cogitative offloading." The closest relevant concept I encountered was "computational offloading," a term that warrants further exploration in future work. I authored the majority of the essay's first half independently, occasionally consulting Perplexity to verify accuracy and test the plausibility of my arguments—a process I found particularly valuable for preventing conceptual missteps. While I maintain full responsibility for all claims and positions advanced in this essay, I encountered challenges when attempting to generate illustrative examples. To address this limitation, I engaged in a dialogue with Claude regarding the distinctions between cognitive and cogitative offloading. Claude generated initial examples, which I subsequently submitted individually to Perplexity for accuracy verification and plausibility assessment against original sources before revision to make the text more reader friendly (I hope I didn’t distort nuances) and incorporation into the final text. This portion of the work involved crafting the language used to explain these discipline-specific examples in such a manner that a non-disciplinarian could get the gist and was the most time consuming part of the writing. I had to understand and judge the examples myself before I could use them. I discarded several for a variety of issues and sculpted those I used with loving care in hopes that they clarify my meaning.
What a rich comment—thank you! The GPT Zero example is incredible. What a time-saver for teachers. I imagine GPT Zero could filter out the tests that drive inappropriate cognitive offloading, decompose the assumptions of the test, lift those assumptions to a more sophisticated level, and design better tasks for both instruction and assessment.
Regarding the research on AI and critical thinking, let me pull together some studies for you. Several recent works have examined this relationship:
The most comprehensive is Gerlich's 2025 study (with 666 participants across diverse backgrounds), which found "a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading." Younger participants showed higher AI dependence and lower critical thinking scores.
The Microsoft study I referenced (January 2025) surveyed 319 knowledge workers, revealing that "higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking."
Dr. Ahmed's engineering research found that among 285 students, "68.9% of laziness and 27.7% of the loss of decision-making skills were the result of artificial intelligence use."
Interestingly, a 2023 study on language learners suggests that students with previous AI experience actually demonstrated higher critical thinking in AI-supported environments—pointing to the importance of metacognitive training.
As Gerlich notes, "AI isn't inherently bad for our cognitive abilities—like any tool, it needs to be used correctly."
Key Studies:
- Gerlich, M. (2025). "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking" - https://www.mdpi.com/2075-4698/15/1/6
- Microsoft Research (2025). "The Impact of Generative AI on Critical Thinking" - https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/
- "AI tools may weaken critical thinking skills by encouraging cognitive offloading" (PsyPost) - https://www.psypost.org/ai-tools-may-weaken-critical-thinking-skills-by-encouraging-cognitive-offloading-study-suggests/
- "Is AI eroding our critical thinking?" (Big Think) - https://bigthink.com/thinking/artificial-intelligence-critical-thinking/
So in my terms
Data dump
Data conclusions
Garbage in?
Yes? No? Iffy?
Sift and edit.
I responded to a person who very much likes her AI because it enhances and amplifies her feelings.
I suggested a test of asking AI for its opinion of something that she really opposes to see if the response approves or disapproves of her faux belief.
Does this make sense?
Can one test AI this way?