Before entering the complexities of how badly we as a culture misunderstand learning, it is essential to foreground the source where I first encountered the argument: Frank Smith’s powerful book, Insult to Intelligence: The Bureaucratic Invasion of Our Classrooms (1988). Smith’s central thesis is a cogent critique of the prevailing “drill and kill” approach to education, which hasn’t changed in the intervening years—a regime of constant drilling, testing, and grading that, he argued, tragically misunderstands how children and adults actually learn.
Drawing on decades of research and classroom experience, Smith contends that these bureaucratic practices insult both the intelligence of students and the professionalism of teachers, reducing learning to fragmented, decontextualized tasks that bear little resemblance to genuine intellectual growth.
He insisted that true learning is natural, purposeful, and deeply contextual, occurring not through isolated piecemeal assignments but through meaningful experiences, curiosity, and trust in the learner’s innate capabilities. Smith’s manifesto is a call to resist the ongoing reduction of education to mechanical processes and to reclaim a vision of schooling that honors the complexity and potential of every learner. His call is as vital today as it was in 1988.
The following essay builds upon Smith’s foundational critique in the context of Luciano Floridi’s insights into the Infosphere, exploring how our obsession with measurement and the misapplication of continuous systems thinking have led us further astray in understanding the realities of human learning.
The Invisible Fault Line: How Humans Misunderstand Learning
In a world obsessed with measurement, we've made a critical error. While engineers can predict exactly how a forty-ton truck will stress a steel bridge down to the millimeter, our most expensive educational metrics fail to capture the moment a child suddenly grasps how to read. This is a fundamental category mistake. For centuries we've been applying the mathematics of continuous systems to the inherently discrete nature of human cognition, intelligence, and learning, and the consequences ripple through every classroom in America. As we enter what philosopher Luciano Floridi calls the "hyperhistorical" age—where information technologies are no longer just tools but conditions for human flourishing—this misalignment between how we measure learning and how learning actually happens has never been more consequential.
History Isn’t History
In this philosophy, the function of history changes. Simply by a bit of derivational suffix magic, Floridi enables us to write sentences like “Galileo meets historically with Leonardo to have a cup of wine and to discuss light and shadow.” How does one meet “historically”? Floridi writes in The Fourth Revolution (2014):
"Prehistory and history work like adverbs: they tell us how people live, not when or where they live. From this perspective, human societies currently stretch across three ages, as ways of living."
He then defines these adverbs as follows: (a) prehistory: “No ICTs (Information and Communication Technologies),” or societies living “prehistorically, without recorded documents”; (b) history: “Individual and social well-being related to ICTs,” or societies that “rely on ICTs to record, transmit, and use data of all kinds. In such historical societies, ICTs have not yet overtaken other technologies, especially energy-related ones, in terms of their vital importance”; and (c) hyperhistory: “Individual and social well-being [is] dependent on ICTs,” that is, societies and environments “in which ICTs and their data-processing capabilities are essential conditions for the maintenance and any further development of societal welfare, personal well-being, and overall flourishing.”
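Notice that these three ages are discrete, typed categories, not points on a sliding scale. A minimal sketch, my own illustration rather than Floridi’s formalism (the predicate names are hypothetical), makes the discreteness explicit:

```python
from enum import Enum

class Age(Enum):
    """Floridi's three 'adverbial' ways of living, as discrete categories."""
    PREHISTORY = "no ICTs; no recorded documents"
    HISTORY = "ICTs record and transmit data but do not yet dominate"
    HYPERHISTORY = "ICTs are essential conditions for well-being"

def classify(records_data: bool, icts_essential: bool) -> Age:
    # Categorical, not continuous: a society occupies exactly one age,
    # and there is no meaningful value "halfway between" two ages.
    if not records_data:
        return Age.PREHISTORY
    return Age.HYPERHISTORY if icts_essential else Age.HISTORY
```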
Perhaps the most significant insight I’ve derived for language and literacy pedagogy is entangled with this view of history divided into epochs based on the presence, prevalence, or absence of information and communication technologies. Accordingly, the period we refer to as “history” began with the invention of ICTs, including writing and reading. This period marks the emergence of formal schooling as well. Prehistory lasted for millennia and is lost to us except for “texts” written in bone fragments, ceramics, and fossils; history lasted for centuries and, though not lost, is contested; but hyperhistory began recently and has left most of us gasping for breath.
How do we evaluate objects and experiences in hyperhistory, wherein we drink wine but speculate in the wine market? Floridi uses wine tasting to illustrate how different purposes require different methods. The question “How does the wine taste?” is answerable by our primal self. Tasting binds the evaluator to a primal location in space and time with a wine glass. “What winery produced it?” is a question at a higher level of abstraction that seems to depend on living in history, where answers to such questions have been documented.
Floridi would tell us to identify the system or entity we want to understand prehistorically, historically, or hyperhistorically. The “tasting LoA” (level of abstraction) includes observables beyond taste: nose (bouquet), legs (viscosity), robe (peripheral color), clarity, sweetness, acidity, fruit, tannicity, and length, each with determined types that create meaningful distinctions for evaluating wine quality. Although these judgments depend upon sensory perceptions, such conceptual frameworks begin to appear in history. Information technology intercedes between the lips that drink and the tongue that talks.
For the "purchasing LoA," relevant observables shift to maker, region, vintage, supplier, quantity, and price, many of which have quantitative answers . Hyperhistorically, the question of which wine to buy in large quantities could likely be made more carefully using AI as an information source. The "cellaring LoA" focuses on drinking window, serving temperature, decanting time, alcohol level, food pairings, and quantity remaining in storage, a type of analysis perhaps best answered by experts trained in the documents of wining and dining. Each LoA serves different purposes and makes different questions answerable, but none attempts to force continuous mathematical relationships onto what are discrete categorical distinctions.
Discrete vs Continuous Systems
When a forty-ton truck rumbles onto a steel bridge spanning a river, something continuous happens beneath its wheels. Because the force of gravity is continuous, because the material substance of the truck is stable, and because the massive beam resisting the downward force of gravity has been calibrated to keep the bridge aloft, the truck's weight creates a concentrated force pressing down at one specific point, while the beam's own weight adds pressure along its entire length, imperceptibly to the naked eye but measurable nonetheless. As the load of the truck moves forward predictably, the steel responds with mathematical precision.
Engineers have spent centuries learning to predict exactly what happens to the steel in this moment. The interest of the engineer is in the steel, not the truck. The problem is the steel. Engineers understand that when you double the weight of the truck, you roughly double the stress on the beam. Move the truck a few feet forward, and the stress pattern shifts proportionally. Push the beam twice as hard, and it bends approximately twice as far. This predictable relationship between cause and effect—small changes in load producing correspondingly small changes in response—makes it possible to use mathematical tools, i.e., calculus, to analyze the beam’s behavior.
Calculus can handle smooth, continuous change, the kind that happens in the natural world. The stress doesn't jump suddenly from one value to another as the truck moves across the bridge; instead, it flows gradually and predictably from point to point along the beam's length. Engineers can slice the beam into infinitely thin cross-sections, analyze each tiny piece separately, then add up all those pieces to understand how the entire structure behaves. They can calculate exactly where the maximum stress occurs, predict how far the beam will bend, and determine whether the beam will reliably support the bridge given predetermined load limits.
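A standard textbook idealization makes the linearity concrete. For a simply supported beam with a central point load, midspan deflection is delta = PL³/48EI; the sketch below, with illustrative values rather than a real bridge design, shows that doubling the load exactly doubles the deflection:

```python
def center_deflection(load_n: float, length_m: float,
                      youngs_modulus_pa: float, moment_of_inertia_m4: float) -> float:
    """Midspan deflection of a simply supported beam under a central point load.

    Linear-elastic textbook formula: delta = P * L**3 / (48 * E * I).
    """
    return load_n * length_m**3 / (48 * youngs_modulus_pa * moment_of_inertia_m4)

E = 200e9   # Pa, typical elastic modulus of structural steel
I = 0.1     # m^4, illustrative second moment of area for a large girder
d1 = center_deflection(200_000, 20.0, E, I)  # roughly a 20-tonne load at midspan
d2 = center_deflection(400_000, 20.0, E, I)  # double the load
assert abs(d2 / d1 - 2.0) < 1e-9             # the response scales linearly
```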
This mathematical precision allows engineers to design bridges with confidence. They can optimize the amount of steel needed, ensure adequate safety margins, and predict how the structure will perform under various conditions. The underlying physics operates through smooth, continuous relationships that make such mathematical analysis both possible and powerful.
Yet when we turn from the bridge to the classroom, from steel beams (a continuous system) to struggling readers (a discrete one), these same mathematical tools suddenly lose their power and relevance. An early reader does not emerge a little bit each day, each week. A struggling reader does not gradually transition from halting literacy to successful literacy in the smooth, predictable way that stress gradually increases along a loaded beam. Instead, reading development occurs through distinct, often sudden transitions between qualitatively different states of understanding.
There is no differential equation that describes the rate at which phonemic awareness develops, because phonemic awareness represents a categorical shift in how children perceive language sounds rather than a gradually increasing quantity that gets better with drill and practice. In other words, a child who is phonemically aware exists in a qualitatively different state than the child who is not. This difference is not quantitative.
These are discrete, binary variables: on or off. They may vary in observable frequency in behavior, and it may take a while to get them all sorted out, but once on, they are always on. The self-teaching mechanism that several researchers in early reading describe gains sophistication at a more abstract system level once phonemic awareness is in place. The questions we ask about discrete variables, once they are on, concern how, when, why, and under what conditions the behavior becomes habitual and automatic.
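In code, such a variable is a latch, not a dial. A minimal sketch of the on/off claim, using hypothetical names of my own rather than any actual model of reading development:

```python
from dataclasses import dataclass

@dataclass
class Reader:
    """Phonemic awareness modeled as a discrete, latching variable."""
    phonemically_aware: bool = False  # off until a categorical shift occurs

    def breakthrough(self) -> None:
        # Once on, always on: the latch never resets to False.
        self.phonemically_aware = True

helen = Reader()
print(helen.phonemically_aware)  # False: no fractional state like 0.37 exists
helen.breakthrough()
print(helen.phonemically_aware)  # True, and it stays True
```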
Why Conflating Continuous and Discrete Systems Matters in Schooling
When educational systems treat learning as a continuous variable—something that can be measured, graphed, and predicted like the stress on a bridge beam—they fundamentally misunderstand the nature of cognitive development. This conflation creates systematic distortions that ripple through every aspect of schooling from curriculum design to assessment practices to teacher preparation.
Consider what happens when we apply continuous measurement thinking to Helen, a hypothetical six-year-old Reading Recovery student. If learning were truly continuous, we would expect her "reading improvements" to increase predictably over time, like water rising in a graduated cylinder. We might expect that doubling her instructional time (increasing the dosage) would roughly double her progress. We would design interventions based on the assumption that small instructional inputs produce proportionally small learning outputs.
But Helen doesn't learn this way. For weeks, she may show little measurable progress on conventional assessments while her cognitive system reorganizes around new understandings of how print works. Then, seemingly overnight, she demonstrates mastery of phonemic segmentation—a capability that was entirely absent the day before. This breakthrough is not a gradual accumulation of "phonemic awareness points" but a qualitative shift in how her mind processes spoken language. The transition resembles a phase change in physics—water becoming ice—rather than a temperature rising steadily on a thermometer.
Educational measurement systems designed for continuous variables simply miss these breakthroughs. When we force Helen’s reading development onto a continuous scale—assigning her reading levels like 2.3, 2.7, 3.1—we create the illusion of smooth progress while ignoring the actual learning events that matter. The numbers suggest incremental improvement when the reality involves categorical shifts between qualitatively different ways of processing text.
This measurement distortion leads to intervention strategies that target the wrong level of abstraction. If Helen scores low on a "phonemic awareness test," continuous thinking suggests providing more phonemic awareness instruction until her scores improve. But if her primary difficulty stems from problems using syntax as a resource for reading—a strategic processing issue linked to prosody, which is also an auditory substrate of reading—then phonemic awareness drilling not only wastes time but may interfere with the discrete cognitive reorganization she needs to accomplish.
The conflation becomes particularly damaging when scaled up to school and district levels—much higher levels of abstraction, which depend upon actionable information from lower levels. Continuous measurement approaches compress individual students' discrete learning processes into mean scores, standard deviations, and percentile ranks that administrators can track over time. It’s as if administrators wanted to order massive amounts of wine relying on consumer ratings while ignoring expert wine tasters. These statistical manipulations suggest that learning progresses uniformly across populations and that systemic interventions should produce predictable improvements in measured outcomes. The problem is that some outcomes are discrete, not continuous.
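A short simulation shows how this compression manufactures the appearance of continuity. Assume, hypothetically, that each student is a pure step function, switching from "not yet" to "got it" in a single week; the cohort mean still rises in a smooth curve:

```python
import random

random.seed(1)

# Each simulated student flips from 0 to 1 in one breakthrough week.
breakthrough_weeks = [random.randint(1, 30) for _ in range(200)]

for week in range(0, 31, 5):
    # Cohort mean = fraction of students whose breakthrough has already occurred.
    mean = sum(w <= week for w in breakthrough_weeks) / len(breakthrough_weeks)
    print(f"week {week:2d}: cohort mean = {mean:.2f}")

# The mean climbs gradually, suggesting smooth population-wide "growth,"
# yet no individual student ever occupies an intermediate state.
```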
The tragedy compounds when teachers, trained to think in terms of continuous progress monitoring, lose sight of the discrete learning events they are trying to facilitate. Instead of observing whether Helen has developed the ability to notice when her reading stops making sense—a discrete, typed capability—they track her progress on continuous scales that obscure the very phenomena they need to understand and support.
This systematic conflation of continuous and discrete variables doesn't simply create measurement problems that we chalk up to error variance; it shapes educational culture in pernicious ways that make learning harder to achieve. When everyone from policymakers to parents expects learning to follow predictable growth curves, the messy, non-linear, threshold-driven reality of cognitive development becomes invisible. The discrete breakthroughs that characterize genuine learning get lost in the noise of continuous data systems designed to detect the number of problems of a certain type missed, memory for templates used to comprehend charts and graphs, or the ability to match one of four options in answer to a question about a phrase that exists in the test passage. It becomes challenging to make any connection between what is recorded and what has or has not been learned, the grail of good teaching. Even picking the right answer is no guarantee of learning.
What To Do?
Knowledge that learning cannot be easily measured, though numbers can be useful, is especially valuable now as teachers work to understand the impact LLMs will have on literacy practices. Using LLMs in any school context where students are expected to actually do the assignments they are given will require sets of discrete variables as well as continuous variables, designed in a rational architecture that prevents category errors from defaulting to the lowest level of abstraction. The pedagogical extension of such category errors is blind remediation (blind in that nobody knows what the student has learned, just that the student probably has not learned something or other), which is precisely where learners learn the least for the longest time, if we can trust the data from the history of community college remedial programs.
Education pays attention to what it measures. Measuring continuous variables will always be important—and useful. We know, for example, that students who approach the writing of prompts for LLMs with purposeful strategies grounded in prior knowledge activation and an organizing learning intention are more likely to have a successful interaction, an observable that can be both self-assessed and measured in combination. As a continuous indicator of pre-prompt preparation, students could document the number of minutes of pre-prompt activity and their subjective experience.
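One way to honor both kinds of variables is to record them side by side without collapsing them into a single score. A sketch of such a record, with hypothetical field names of my own:

```python
from dataclasses import dataclass

@dataclass
class PromptPrepRecord:
    """One student's pre-prompt preparation, mixing variable types deliberately."""
    minutes_of_preparation: float     # continuous: time spent before prompting
    self_rating: int                  # ordinal self-assessment, 1 (low) to 5 (high)
    activated_prior_knowledge: bool   # discrete: listed what they already know?
    stated_learning_intention: bool   # discrete: wrote an organizing intention?

record = PromptPrepRecord(
    minutes_of_preparation=12.5,
    self_rating=4,
    activated_prior_knowledge=True,
    stated_learning_intention=True,
)
# The booleans are either satisfied or not; averaging them together with
# minutes would repeat the category error this essay describes.
```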
This paradigm shift represents part of the substrate teachers are thinking about when they comment on shifting from product to process. I’m not naïve, however. From the early 20th century forward, human learning has been modeled as a continuous variable. The intent is to measure individual students to wash out error variance statistically and arrive at mean performance scores of groups. When we rely on tools that privilege continuous variables, we overlay assumptions of the normal distribution on our learners, we design measurement tools to spread out our students in ranks, and we ignore breakthroughs.
Lessons from History
In 1983, the opening lines of the report A Nation at Risk gained a lot of press: "Our nation is at risk... the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people." Schools had gotten themselves into a pickle where no viable move remained within existing educational practices. Leaders would be unable to increase student achievement or maintain international competitiveness without radical reform. But was the concern that students weren’t learning? Not really. The report argued that learning was so awful because educational standards, those continuous distributions that let us rank one another competitively, had deteriorated. This mediocrity was a measurement problem.
The answer, obviously, lay in new standards. Raise the standards, and they shall learn. Test them using better tests, and they shall learn.
The slogan "No Child Left Behind," which accompanied President George W. Bush's 2001 education legislation, became infamous for its unintended consequences. While noble in aspiration, one could charitably argue, this statement heralded an era of high-stakes continuous-variable testing and accountability measures that changed nothing, except for its empirically evidenced demoralizing effects on teachers. The emphasis on test scores created a stalemate where innovative teaching became increasingly difficult as schools focused narrowly on theoretically indefensible measurements of learning proxies rather than the average number of epiphanies per day.
The metaphor of stalemate crystallizes the condition of schools for the past fifty years at least—a deadlock in which neither side, the technocrats nor the humanists, can advance—because schools have avoided the work of figuring out which levels of abstraction are understood through continuous measurement and which require discrete, qualitative, contextualized data assessed by an embedded expert. This isn’t atmospheric work. It is infospheric work.
What would happen if we woke up tomorrow and we had to justify our use of measurement tools in schools? How would you do it? I’d love to hear your thoughts.
Reading List
1. Floridi, L. (2003). On the intrinsic value of information objects and the infosphere. Ethics and Information Technology, 4(4), 287–304.
2. Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18(3), 303–329. URL: https://philarchive.org/rec/FLOTMO
3. Floridi, L. (2010). Information: A Very Short Introduction. Oxford: Oxford University Press. URL: https://www.amazon.com/Information-Very-Short-Introduction-Introductions/dp/0199232393
4. Floridi, L. (2014). The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford: Oxford University Press. URL: https://global.oup.com/academic/product/the-fourth-revolution-9780199606726
5. Floridi, L. (Ed.) (2015). The Onlife Manifesto: Being Human in a Hyperconnected Era. Springer. URL: https://link.springer.com/book/10.1007/978-3-319-04093-6
6. Floridi, L. (2019). The Logic of Information: A Theory of Philosophy as Conceptual Design. Oxford University Press. URL: https://ndpr.nd.edu/reviews/the-logic-of-information-a-theory-of-philosophy-as-conceptual-design/