When Langue Gets Paroled: The Chomsky Problem Revisited
“On the one hand, semantic theorizing depends (to some extent) on the intuitions of native speakers to identify what is meaningful, and what is anomalous or otherwise flawed. On the other hand, we now have researchers stating that such intuitions are unreliable because they ascribe meaning to text, i.e. the output of an LLM, where there is none. I'd love to see this tension surfaced and addressed.”
Louise Vigeant, PhD
In 1957, linguist Noam Chomsky created a now-famous sentence to make a point about human language: “Colorless green ideas sleep furiously.” The sentence follows perfect English grammar (adjective, adjective, noun, verb, adverb), yet it’s complete nonsense. Chomsky used this example to demonstrate that our brains distinguish between two separate things: whether something sounds right grammatically, and whether it actually means something. The gold standard for linguistic meaning is the human receiver’s judgment.
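To make the grammar/meaning split concrete, here is a minimal sketch in Python using the NLTK library; the toy grammar below is my own illustration, written just large enough to cover Chomsky’s sentence. The parser accepts the sentence because nothing in syntax checks whether ideas can be green or sleep can be furious.

```python
# A minimal sketch, assuming Python with NLTK installed (pip install nltk).
# The toy grammar is illustrative only, not a serious grammar of English.
import nltk

grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> Adj Adj N
    VP  -> V Adv
    Adj -> 'colorless' | 'green'
    N   -> 'ideas'
    V   -> 'sleep'
    Adv -> 'furiously'
""")

parser = nltk.ChartParser(grammar)
sentence = "colorless green ideas sleep furiously".split()

# The parse succeeds: the sentence is syntactically well-formed.
# Nothing here knows, or cares, that ideas cannot be green.
for tree in parser.parse(sentence):
    print(tree)
```

Reverse the word order (“furiously sleep ideas green colorless,” Chomsky’s contrasting example) and the parse fails: the grammar is doing real work, just not the work of meaning.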
Chomsky’s nonsense sentence foreshadows the problem of student essays that are well-formed but hollow, and the time has come to look into what we mean by “hollow.”
Putnam’s Ant in the Sand
In 1981, philosopher Hilary Putnam broadened the scope of the conundrum by turning not to linguistic symbols but to icons, which share their graphic nature with letters and words and likewise trigger meaning in human minds. “An ant is crawling on a patch of sand,” Putnam wrote. “As it crawls, it traces a line in the sand.
By pure chance the line recrosses itself [and] ends up looking like a recognizable caricature of Winston Churchill.” Whereas colorless green ideas are meaningless, Churchill has the weight of history on his back. Here’s Putnam’s question: “Has the ant traced a picture of Winston Churchill, a picture that depicts Churchill?” Or has the ant given us just another random, hollow shadow of meaning? In education, this reminds us that not every resemblance to meaning deserves to be treated as meaningful.
Searle and Intentionality
In 1980, philosopher John Searle took a different tack. In his overarching argument against the possibility of humans’ ever developing what he called “strong AI,” Searle proposed that human intentionality cannot be duplicated by a computer (essentially the point Luciano Floridi would later make, that agency isn’t equivalent to intentionality): according to Searle, “Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain.”
The machine would have to pass Chomsky’s test, in that colorless green ideas can’t be said to reveal intentionality; there is no causal power that would propel a human to utter such a sentence. Like Putnam, Searle is concerned about mistaking an ant’s squiggles in the sand for a meaningful work of depiction.
Turing and the Imitation Game: Who Lies Better?
In 1950, Alan Turing published a seminal paper arguing that whether machines think like humans, responding to and making meaning, can be judged through his “Imitation Game,” in which a man and a woman, hidden from an interrogator, answer questions like “How long is your hair?”
The hidden man intends to deceive the questioner by purposely fabricating (hollowing out real meaning), e.g., describing long flowing hair over his crew cut (keep in mind this was 1950), while the woman’s best strategy, Turing suggested, is simply to tell the truth. Humans are skilled at intentional fabrication, as we know. The “meaning” of an intentional lie is a complex tangle of pragmatics.
Then we replace the deceiving figure with a machine. If the machine successfully deceives the interrogator more often than the human it replaced, it has “passed” as human. By taking human intuition, along with human gullibility, as the standard, Turing preemptively turned the tables on Putnam’s ant that unintentionally drew Churchill and on Searle’s stand against strong AI, which treats unintentionality as the hallmark of artificial intelligence. Turing’s conclusion hinges on what we mean by “machine” and by “think”; just because a machine is not a human, Turing argued, doesn’t mean a machine cannot think:
“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think.’ The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd.”
For decades, this distinction seemed rock-solid. We assumed people would be able to reliably tell the difference between meaningful language and elegant gibberish. But the rise of artificial intelligence—specifically, large language models like ChatGPT and Claude—has shattered this assumption in ways that should deeply concern everyone involved in education. Clearly, machines can pass the Turing test—humans will buy lies and nonsense spouted by a machine, though we might have to assess gullibility as a rogue variable.
The New Challenge
When people read text generated by AI language models, they consistently perceive it as meaningful, coherent, and purposeful. Students, teachers, and even researchers regularly engage with AI-generated content as if it expresses meaning and intent. Yet according to many computational experts (who admit they don’t themselves fully understand the ghost in the machine), these systems are, on the machine’s side, essentially producing very sophisticated versions of “colorless green ideas sleep furiously”: grammatically acceptable but semantically empty.
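What “sophisticated pattern-matching” means can be seen in miniature. The sketch below is a toy of my own devising, not how production LLMs work: a bigram model that learns only which word tends to follow which, then emits fluent-looking sequences with no referent or intent behind them. Real systems swap the word-pair table for a neural network trained on vast corpora, but the underlying move, predicting the next token from patterns in prior text, is the same.

```python
# A deliberately tiny sketch of statistical text generation, using only
# the Python standard library. The corpus is invented for illustration.
import random
from collections import defaultdict

corpus = ("the green idea slept on the mat . "
          "the colorless cat slept furiously . "
          "the cat saw the green mat . "
          "the idea saw the colorless cat .").split()

# Record, for every word, the words that have followed it in the corpus.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(seed="the", max_words=10):
    """Emit a word sequence by repeatedly sampling a plausible successor."""
    words = [seed]
    while len(words) < max_words and follows[words[-1]]:
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

print(generate())  # e.g. "the colorless cat saw the green idea slept furiously ."
```

Nothing in the program knows what a cat or an idea is; it only knows what tends to come next. That is the sense in which critics call the output hollow, however fluent it looks.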
This creates what I’m calling the meaning attribution crisis in education. Instead of Where’s Waldo, the game is now: where’s meaning? If humans cannot reliably distinguish between authentic human meaning, referential and intentional, and artificial meaningfulness, what does this say about our assumptions about language, learning, and assessment?
Why This Matters for Education
Educational systems have always depended on a basic assumption: that teachers and students can recognize meaningful communication when they see it. We grade essays based at least partly on meaning and intention, though we might vigorously debate this point. We assess comprehension through meaningful responses. We design curricula around the transmission of meaningful knowledge. We are, however, in a meaning attribution crisis.
If our intuitions about meaning are systematically unreliable—if we can be easily fooled by sophisticated pattern-matching—then several foundational practices in education become questionable:
Assessment: How do we evaluate student work when we cannot reliably distinguish between understanding and mimicry? And isn’t this a problem even in teaching without AI? How do we know that students writing as humans are not simply fabricating frames of meaning they neither understand nor value whether or not they use AI?
Literacy instruction: How do we teach critical reading when the boundary between meaningful and meaningless text has become blurred? How does one critically read a text written without intention and without semantic grounding? Can an entity even be called a text if it is meaningless at production? If it is a non-text, how can it be “read” in a human sense?
Research and scholarship: How do we maintain academic integrity when AI can produce non-text that feels meaningful even to expert readers? Medical researchers have been experimenting with AI as a time-saver to write boilerplate summaries of previous research in literature reviews. Do we come to value the meaning perceived by the reader despite its meaningless origins?
Three Possible Interpretations
1. Our understanding of meaning needs updating. Perhaps semantic meaning was never the mystical property we thought it was. Perhaps it’s always been artificial in the sense that language itself is a technology. Maybe what we call semantic meaning has always been about pattern recognition, coherence, and contextual appropriateness, things that advanced AI can genuinely achieve. Perhaps we need to theorize levels of meaning in the way we have theorized levels of comprehension.
2. Human meaning detection is more fragile than we realized. What if a fully coherent, well-annotated, superb articulation of the meaning of a text is actually expressing quasi-realistic, plausible nonsense? Perhaps our ability to distinguish meaningful from meaningless language breaks down when the meaninglessness is distributed across complex, longer texts rather than concentrated in obvious examples like Chomsky’s sentence.
3. We’re witnessing a fundamental shift in what counts as meaningful. Perhaps in an age of AI-generated content, we need new frameworks for understanding meaning that don’t depend solely on individual human intention and consciousness. In other words, an entirely new category of meaning, separate from semantic meaning, might be called “simulated meaning.” We already have iconic examples like the visual image of a light switch on a wall. The switch expresses meaning while having neither consciousness nor intention; or can we agree that meaning applies in this instance? Perhaps communication involving systems and methods not human in intent and origin requires a new field of artificial semantics.
Implications for Educational Practice
None of these interpretations suggests we should panic or abandon current educational practices. Instead, they point toward necessary adaptations:
Enhanced metalinguistic awareness when reading simulated non-texts: Students need explicit instruction about how language works, how meaning is constructed, and how to critically evaluate the sources and authenticity of texts. If language is a technology, we might want to rethink rather romantic and mushy constructs like “voice.”
New forms of assessment: We may need evaluation methods that focus less on final products and more on thinking processes, asking learners to examine and express their thinking during readings of various text and non-text types, since AI can increasingly produce sophisticated final products. Show your work.
Redefined literacy skills: Critical literacy must now include the ability to identify, analyze, and appropriately use AI-generated content.
Moving Forward
The Chomsky problem—the gap between grammatical correctness and meaningful content—has evolved from a theoretical curiosity into a practical challenge that affects every classroom. Rather than seeing this as a crisis, we could decide to view it as an opportunity to develop more sophisticated, nuanced approaches to language and meaning.
It’s impossible to return to a simpler time when the boundaries seemed clearer. The field needs educational approaches rooted in sound philosophies, approaches that prepare students and teachers to teach and learn in a world where the nature of meaning itself is being redefined by technology, and that will themselves need future refinement.
Our students will inherit a world where human and artificial intelligence co-labor and compete in the realm of language. Our responsibility is to ensure they have the conceptual tools to thrive in that world, starting with a clear-eyed understanding of what we do and don't know about meaning itself.
This analysis suggests we need sustained dialogue about the changing nature of semantic meaning in an AI-saturated world. Yes, we might need to talk more and argue less. The stakes are too high for education to react by oversimplifying and commodifying answers (cf. Turnitin); we truly can proactively engage with these fundamental questions about language, learning, and human understanding, not just in academic journals but in middle and high school classrooms.
What if the future of education depends less on teaching what words mean, and more on teaching what meaning means?

You have left me with a new conundrum. I am trying to reconcile the meaning crisis per Vervaeke and now AI "creates" a new crisis of masquerading intentionality perceived as meaningful. I once failed a math exam where I had all the right answers but did not show my work.
Were my answers empty of intention?
An inspiring article but you had me at the mention of Chomsky.