Inventing the Wheel
We've been outsourcing our thinking since the dawn of humanity. From the first stone tools to written language, humans have consistently found ways to distribute cognitive load beyond the confines of our individual minds. But something unusual is happening now.
When a four-year-old child faces a challenging mental rotation task, they instinctively look for ways to offload that cognitive burden. When the task gets harder, even young children strategically turn to external tools. We are, it seems, natural-born offloaders.
This innate tendency toward cognitive delegation has served humanity well throughout our evolution. Yet today, as artificial intelligence systems increasingly handle not just information storage but information processing itself, we face unprecedented questions about the future of human thought.
What happens when our cognitive partners don't just remember for us, but think for us? And what does this even mean? How might this reshape not just what we can accomplish, but how we develop as thinking beings? This exploration examines the surprising science of cognitive offloading and what it means for human intelligence in the age of AI.
I am because I offload
While children as young as four can engage in effective cognitive offloading strategies, their metacognitive awareness of when and how to optimally offload is still developing, raising important questions about AI's impact on cognitive development. In studies where children aged 4-11 were allowed to manually rotate a turntable rather than rotate its contents in their heads, eliminating the internal demands of mental rotation, even the youngest children showed a linear relationship between mental rotation demand and likelihood of using the external strategy. When things get tough, the tough offload.
During problem-solving events among four-year-olds, whether offloading occurs is contingent on the circumstances. The decision to offload depends on how challenging the problem is for the child, how accessible the offloading device is, and how success is defined. If success means getting the right answer as expediently as possible, the value of offloading is high. If success means spinning the device and predicting, as soon as possible after it starts, where it will stop, it becomes much harder to conceive of a way to offload.
To Cognize or Not To Cognize
I’ve long been intrigued by the wide variability in how people use the word ‘cognition.’ For example, I’m not sure everyone would agree that predicting where a roulette wheel will stop engages cognition as much as imagination. In my mind, imagination is intertwined with cognition in a tension whose polarity reverses depending on the frame of reference.
Just as a Labrador retriever must implicitly apply the laws of physics to time its leap and catch the frisbee, the child must gauge the speed of the spinning body and its rate of slowing in order to judge the likely stopping point. There is an obvious athletic, aesthetic component, it seems to me.
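To make that judgment concrete, here is a minimal sketch of the simplified physics problem the child (and the dog) is implicitly solving. The constant-deceleration model and all of the numbers are my own illustrative assumptions, not anything taken from the studies themselves.

```python
import math

def predicted_stop_angle(theta0, omega0, alpha):
    """Predict where a spinning turntable will stop, assuming constant
    angular deceleration (a deliberately simplified model).

    theta0: current angular position (radians)
    omega0: current angular speed (rad/s)
    alpha:  angular deceleration (rad/s^2), must be positive
    """
    # Standard kinematics: the wheel travels omega0^2 / (2 * alpha)
    # radians before coming to rest.
    travel = omega0 ** 2 / (2 * alpha)
    return (theta0 + travel) % (2 * math.pi)

# Illustrative numbers only: spinning at 6 rad/s, slowing at 1.5 rad/s^2.
print(predicted_stop_angle(0.0, 6.0, 1.5))  # 12 rad of travel -> about 5.72 rad
```

The child, of course, solves nothing of the sort explicitly; the point is that the prediction has a definite computational structure, whether it runs on neurons or in code.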
Upping the ante, it's clear that the offloading of human cognition, however we define it, is the ultimate explanation for human progress. How else did we get to the Moon? From the earliest stone tools to writing systems to computing machines that externalized memory, humans have incessantly developed technologies to distribute cognitive load beyond our individual minds.
This process of cognitive offloading has accelerated dramatically in the digital age, with computational tools handling increasingly complex tasks that once required dedicated mental effort. Computers, after all, were once actually human: 'computer' was a job title for people hired to calculate.
What makes this evolution particularly fascinating is how it reshapes not just what we can accomplish, but how we think our way to accomplishment. As Bruner might suggest, these external scaffolds don’t merely supplement cognition; they transform it, creating new possibilities for thought while potentially atrophying others.
The question facing us now, especially as AI systems become more sophisticated, is whether we're on the road to a qualitative shift where the boundary between internal and external cognition becomes not just blurred but perhaps irrelevant to understanding human intellectual development.
Those Who Forget the Past…
Sometimes we forget that cognitive offloading was a thing long before AI. If memory serves [searching…searching], the original theory emerged well before the 21st century and has held up as a conceptual model robust enough to support interdisciplinary research. In fact, we are offloading cognitive work as we read (and write) this essay. We are using the text as a container for meaning, downloading and uploading as we go. We shuttle through all of its entanglements and paradoxes and get on with the task at hand, knowing we can revisit any part of it or reinterpret it according to our own lights. We need a theory that unifies analog offloading and digital offloading.
Gilbert's (2024) approach to offloading unifies diverse phenomena under a single conceptual framework: offloading behaviors result from rational cost-benefit analyses between internal and external memory—and they always have. When we store information in our brain's memory systems, we face the same inherent limitations our progenitors faced. Our brains have finite capacity, i.e., we can only hold so much information in working memory at once.
This limited capacity creates what economists would call an "opportunity cost." By using some of our memory capacity for one piece of information, we lose the opportunity to use that same capacity for something else. This is why we take notes or record lectures or both. Of course, taking notes is a more effortful kind of offloading than audio recording, but the affordances are very different. Lots to chew on there.
This model explains why using internal memory feels effortful while external memory aids (like notes or pre-AI Google) feel less demanding. Traditional offloading methods still require cognitive work. Looking up information in a book during an open-book test and adapting it to answer questions demands considerable mental effort. What makes AI different is its ability to not only retrieve information but also tailor explanations to specific contexts.
With AI, the cognitive burden shifts dramatically; users no longer need to struggle with recall or with applying retrieved information to new situations. AI not only stores and retrieves data but also processes and contextualizes it, removing multiple cognitive steps that traditional external memory aids still require us to perform.
This model also advances our understanding beyond descriptive accounts to provide a computational explanation for when and why people offload cognitive demands to external tools—increasingly important in our technology-saturated environment where digital offloading options continue to multiply. Metacognition plays a central role in this framework, appearing to function as what some researchers call a "reliability thermostat" that adjusts cognitive effort to maintain desired levels of performance.
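To make the model concrete, here is a toy sketch of what such a value-based decision could look like, with metacognitive confidence playing the role of the reliability thermostat. The function, the weighting scheme, and the numbers are my own illustrative assumptions, not Gilbert's actual formalism.

```python
def should_offload(confidence_internal, reliability_external,
                   effort_internal, effort_external, reward=1.0):
    """Toy value-based offloading decision.

    Each strategy's expected value is its probability of success times
    the reward, minus its effort cost. Offload when the external
    strategy carries the higher expected value. All parameter values
    are illustrative, not empirically calibrated.
    """
    value_internal = confidence_internal * reward - effort_internal
    value_external = reliability_external * reward - effort_external
    return value_external > value_internal

# A hard recall task (low confidence, high internal effort) with a
# trusted, low-effort aid at hand tips the decision toward offloading.
print(should_offload(confidence_internal=0.4, reliability_external=0.9,
                     effort_internal=0.5, effort_external=0.1))  # True
```

In this sketch the thermostat is the confidence term: as metacognitive confidence in internal memory drops, or as trust in the external aid rises, the balance tips toward offloading, which is exactly the pattern the developmental studies above describe.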
Educational Implications in the Age of AI
The educational implications of these findings are profound, an overused word in educational writing that actually applies in this case. For many, critical thinking is at stake. I must disclose my judgment that the phrase ‘critical thinking’ is an even more opaque term than ‘cognition.’ Consider this: ‘critical thinking’ is laminated in education history in multiple layers, ranging from Aristotelian logic to theological hermeneutics to Bloom’s Taxonomy. ‘Critical cognition’ isn’t. Yet we often conflate ‘cognition’ and ‘thinking.’ Research on AI and pedagogy would do well to address this theoretical mixed messaging.
For example, recent research tentatively reveals a significant negative correlation between frequent AI tool usage and critical thinking abilities, suggesting that unchecked cognitive offloading may undermine the very skills education seeks to develop. However, there is no shared understanding of what critical thinking is as a scientific construct. As we integrate AI into learning environments, we face a delicate balancing act: leveraging technology's multifaceted benefits—offloading is but one benefit of AI—while preserving cognitive development, including critical cognition, which I take to mean resisting and interrogating ideas before accepting them.
Developmental research offers some guidance here. While children demonstrate offloading behaviors from an early age, their ability to selectively and appropriately offload in accordance with task difficulty evolves with metacognitive development. Studies show that only children aged 9 and older consistently offload in accordance with task difficulty, suggesting that metacognitive skills must be deliberately cultivated alongside technological literacy. Focusing AI-risk concerns on diminished metacognitive capability seems a higher priority than fears of damage to an amorphous ‘critical thinking.’
What might an educational approach that honors this balance look like? We need pedagogical models that teach metacognitive strategies alongside technological skills. Students need to learn not just how to use AI tools but when and why to use them, developing what we might call "a pedagogy of research-based offloading decision-making."
The Paradox of Progress
There's a certain irony in our current situation. As we develop increasingly sophisticated ways to offload cognition, we seem to be circling back to fundamental questions about what cognition itself means.
Is cognition something that happens exclusively within our skulls? Is cognition a behavior we can turn on and off when we are awake? Or is it better understood as a distributed process spanning neural networks, bodies, and technologies? Have we forgotten Edwin Hutchins’s study of distributed (and offloaded) cognition on an aircraft carrier (see his book Cognition in the Wild)? Without offloading there could be no aircraft carriers.
The extended mind thesis, first proposed by philosophers Andy Clark and David Chalmers in 1998, anticipated this blurring of boundaries. They argued that external objects can function as part of the mind when they play the right kind of role. What makes today's AI systems different from earlier technologies is that they don't just store, calculate, or retrieve information—they engage in processes that look increasingly like thinking itself. Offloading in this sense isn’t quite the same construct in the AI era. Calculating with numbers isn’t the same as computing with language.
This brings us to a provocative question: Are we becoming more or less intelligent as we offload more cognition? The answer may depend on how we define intelligence. If intelligence is the capacity to solve problems and achieve predefined goals (the four-year-old's perspective), then effective offloading certainly can enhance it. But if intelligence is the capacity to perform mental operations without external assistance, a type of intelligence out of step with human biological evolution, then offloading might represent a kind of cognitive atrophy. Lots to chew on here as well.
Where Do We Go From Here?
As we navigate this new cognitive space, several principles might guide us. First, we should recognize that cognitive offloading is neither inherently good nor bad—its value depends on context and purpose. Sometimes struggling through a problem without assistance builds valuable neural pathways; other times, offloading routine calculations frees mental resources for Olympic-level thinking.
Second, we should prioritize developing metacognitive skills that help people make wise decisions about when and how to offload to any tool, but particularly to AI. This includes the ability to accurately assess one's own capabilities, to understand the affordances of different technologies, and to evaluate when the benefits of offloading outweigh the costs.
Offloading the conversational practice itself when learning another language—letting the bot speak for you so you can get credit for the assignment, for example—is absurd. But using AI appropriately as a conversational partner in another language can speed up acquisition, according to studies I’ve been reading in the field of EFL.
Finally, we might reconsider how we evaluate and measure cognitive abilities in a world of ubiquitous offloading technologies. Traditional assessments that prohibit external aids may become increasingly disconnected from real-world cognitive practices. Over time they could become functionally archaic. Instead, we might develop assessments that focus on students' ability to strategically leverage both internal and external resources to solve complex problems.
I’ve been toying around with the acronym “B.R.a.i.N.” I can’t believe it is original with me, but if it is, I’ll take it. “B.R.” is for “brain,” “a.i.” is for artificial intelligence, and “N” is for “nexus.” Studying the brain in teaching and learning events and studying AI in academic activities keeps the two constructs separate. Studying the nexus of brain+AI is a third, combined construct which I predict will generate amazing research of great value to educators in the coming years.
The future of human cognition will follow from its past, which is filled with a steady growth of tools for cognitive offloading. That future will be neither purely internal cognition nor purely external natural-language computation, but a complex interplay between brains and neural models.
By understanding this relationship through frameworks like Gilbert's value-based decision model, we can manage it. We invented the wheel; we can handle this. Human thought and our increasingly sophisticated machines, working together in public with solid metacognitive maturity and without stigma or shame, could bring this offloading predicament out of the shadows of the modern cave and into the light of day, to the benefit of the children of the future in their classrooms.