You may be coming from a place where grammar has felt like an unwelcome guest in your teaching career, an irritant you could not avoid inviting to the party. There it sits in the corner of the room, never asked to chat, steering clear of the spiked punch, smiling politely somewhere between a subjunctive mood and a once-perfect tense.
Grammar is perceived as tedious and demotivating by both teachers and students. Teachers struggle with large, mixed-background classes, time constraints, and a staggering lack of coherent theoretical approaches to grammar instruction—not to mention a worthwhile curriculum. Why does nothing ever change in grammar? It doesn’t age. Mathematics changes. Biology changes. Grammar? As old as dirt and about as interesting.
Who cares? When two vowels go walking, the first one does the talking. I’m good.
Students, on the other hand, may view grammar as intimidating, puffed up like a peacock, and, finding no use for it whatsoever, irrelevant. It’s another case of what happens in school, stays in school—the Vegas theory. A lot of happy nonsense takes place after the pledge of allegiance. Best forget about it when you are free for the day.
Studies dating back forever have glimpsed how the shadows of grammar dim the lights of motivation. From a 2011 study1 on learning English as a Foreign Language:
“The English teacher is often portrayed as an ‘unattractive grammar monger’ whose only pleasure in life is to point out the faults of others (Baron, 1982, p. 226). For the most part, within the classroom, any mention of grammar causes the student moments of discomfort and sometimes even terror.”
While the effectiveness of sentence diagramming for improving writing has been debated, some studies suggest modest benefits: diagramming can help students understand grammar concepts and sentence structure, which may indirectly improve their writing. My personal experience is at odds with the world on this count. I loved diagramming sentences in middle school. I still instinctively diagram misbehaving sentences. But then I am used to being the odd man out. Fortunately, we don’t have to rely on my unique experiences to generalize. We can turn to scientific research.
***
Researchers have shown that explicitly teaching grammar can be practical and helpful if it is taught in authentic contexts. A lot of this research comes from the field of EFL, English as a Foreign Language. Native English speakers don’t need it. They already use English grammar well enough to be understood. They just don’t understand how it works well enough to make conscious use of what they know intuitively and tacitly—sort of like riding a bicycle or tying your shoes. You don’t think about it. You do it.
Learning English from the ground up as a foreign language is a different kettle of fish. There is no tacit knowledge of English grammar, no intuition internalized by way of a flood of infant experiences in using English. As for riding that bicycle in English, having memorized the conjugation of modal auxiliaries won’t get you to the corner store for a gallon of milk.
Some teachers in Ethiopia2 tried something different in 2022. Instead of teaching grammar as a bunch of rules students need to memorize (“this is a plural noun; is it in agreement?”), they taught grammar through real, meaningful writing tasks touching students’ actual lives and experiences.
The results were pretty striking. The students who learned grammar through this meaningful approach wrote significantly better paragraphs than students who learned through the traditional way. Why?
It seems that when students could see how language choices helped them express their own ideas about things they cared about, the grammar suddenly made sense. It wasn’t tyrannical and idiosyncratic anymore. Grammar was something to use, not just follow. They could use it to communicate more effectively. The teachers found that students became more motivated to write when they could use language patterns to accomplish their intentions. Grammar wasn’t the unwanted guest in class anymore; it became useful and interesting, and students wanted to know it.
Most importantly, the study showed that grammar doesn't have to be taught as "follow these rules or else." When presented as "here are tools that help you say what you want to say more effectively," students naturally improved their writing without grammar anxiety.
***
In America, despite waves of innovation, despite decades of research showing functionality trumps analyticality (how’s that for making grammar work for me), traditional grammar hasn’t budged from its throne. It’s still the uninvited guest that rains on the parade. Ironically, traditional grammar is probably the least coherent model we have. And the models keep coming. Lately, I’ve been studying Word Grammar, championed by Richard Hudson, who offers a more dynamic model; I discovered his work more or less serendipitously a few years ago.
Word Grammar emphasizes dependency relations between words rather than labeling them, classifying them into categories, and hemming them in with rigid phrase structure rules. It presents language as a network of interconnected knowledge—not as hollow vessels, emptied of meaning, that can be pluralized or inflected or made to fill certain slots. This cognitive-linguistic framework integrates grammar with general knowing and thinking, making it distinct from traditional rule-based methods. Think of traditional grammar as learning all the parts of a car engine without ever discussing what makes the car actually go somewhere. You have your noun subject, your verb predicate, and your noun object: “The dog bit the man.” What do we gain from labeling subject, verb, object? How does the darn thing work?
Hudson discusses “functional network representations” of connections between meanings of words, for example, representations of dependency. In Word Grammar, instead of parsing a sentence like "The dog bit the man" into phrase constituents, Hudson sees this string of words as a network of word meanings directly connected to each other through dependencies.
In this example, “bit” is the root of the meaning (the central node in the network). “Dog” depends on “bit.” Without “bit,” “dog” is functionally useless, an abstraction for all practical purposes—just as “bit” is functionally meaningless without its dependents. “Bit” happens. “Dog” means something because it “bit.” Likewise, “man” depends on “bit,” as isolated and useless as “dog” when you take it out of the network.
So examining dependency relations in a string of words isn’t the same as labeling the parts of speech and diagramming their syntactic structure. Unlike traditional tree diagrams that include phrase nodes (NP for “noun phrase,” VP for “verb phrase”), Word Grammar shows direct semantic connections between words. Word Grammar puts the lie to those who argue against the syntactic cueing system as a potent source of information during word decoding. “Dog” is far more than a collection of phonemes, and it becomes even richer when it depends on the word “bit.” This way of thinking creates a flatter, more economical representation where each word in the utterance corresponds to one node in the fabric of meaning, with arrows showing dependency relationships.
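If you want to see this flat, word-to-word geometry computed automatically, modern NLP parsers already work this way. Here is a minimal sketch using spaCy’s dependency parser; its label scheme is not Hudson’s Word Grammar, but, like Hudson, it links each word directly to the word it depends on, with no phrase nodes in between.

```python
# A minimal sketch of a flat dependency analysis, assuming spaCy and its
# small English model are installed (python -m spacy download en_core_web_sm).
# The labels (nsubj, dobj) are spaCy's, not Word Grammar's, but the
# head-and-dependent geometry is the same idea.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The dog bit the man.")

for token in doc:
    # token.head is the word this token depends on; the root points to itself
    print(f"{token.text:>4} --{token.dep_}--> {token.head.text}")

# "The dog bit the man." comes out as:
#  The --det--> dog
#  dog --nsubj--> bit
#  bit --ROOT--> bit
#  the --det--> man
#  man --dobj--> bit
#    . --punct--> bit
```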
Another functional network representation is “default inheritance.” Take an irregular verb like “bite”: the word “inherits” general properties of verbs (having tense, taking subjects). By default, the sentence should read “the dog bited the man.” But in the representation network, “bite,” through experience, overrides the default pattern of forming the past tense by adding -ed. Having a default governing thousands of verbs serves the language user by amplifying a regularity while reserving rote memory for the items that don’t default to -ed.
In the network, “bite” connects to the general verb category and inherits its properties, except that the actual past form is irregular and stored as an exception. Language has a lot of irregularities, which are much more easily learned once the default pattern is mastered performatively. The inheritance pattern allows functional grammar to represent both regularities and exceptions within the same network structure, mirroring how our cognitive system likely processes language information. “Bite” is one of those nodes that inherits some defaults and overrides others. What’s the big deal?
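Default inheritance is easy to picture in code. Here is a toy sketch, not Hudson’s formalism: the general Verb category supplies the -ed default, and an irregular verb stores an exception that overrides it—exactly the regularity-plus-rote-memory division of labor described above.

```python
# A toy model of default inheritance, not Hudson's formalism. The Verb class
# supplies the default past-tense rule; IrregularVerb overrides it with a
# rote-memorized exception, so defaults and exceptions share one structure.
class Verb:
    def __init__(self, base):
        self.base = base

    def past(self):
        # The default every verb node inherits: add -ed
        return self.base + "ed"

class IrregularVerb(Verb):
    def __init__(self, base, past_form):
        super().__init__(base)
        self.past_form = past_form

    def past(self):
        # The stored exception overrides the inherited default
        return self.past_form

print(Verb("walk").past())                  # walked: the default applies
print(IrregularVerb("bite", "bit").past())  # bit: the default is overridden
```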
Word Grammar's approach has important implications for teaching language structure. It seeks relationships between words directly rather than monitoring for violations of abstract structures and rules, making it potentially more intuitive for learners to grasp how words function together to create meaning. When my friend from China says “The dog bited the man,” I don’t hear error; through the lens of Word Grammar, I hear competence, just missing a fairly trivial default bypass. The important work of communication is solid.
So why get all hot and bothered about “The man was bited by the dog”? We’ve moved spark plugs around, but everything works. The car starts. “Bit” is still the central node, “man” and “dog” are still dependent nodes, the central node inherits a different form (“was bited” vs. “bited”) when the dependencies change vis-à-vis the “by” contingency, but “bited” still does as good a job as “bit.” Why spend two weeks learning to classify, label, and diagram grammatical and syntactic structures only to learn that “the dog bit the man”?
Functional grammar flips the approach. It starts with what language actually does in the head (Richard Hudson) and in the world (Michael Halliday). Here's the simple version. Traditional grammar asks: "Is this sentence correct?" Functional grammar asks: "What is this sentence doing?" At its core, functional grammar recognizes that we use language to communicate information and ideas, to connect with other people, to collect and organize our thoughts in ways that make sense to us and to others. We don’t need conscious fingertip knowledge of abstract, traditional grammar for those purposes.
***
When your students write or speak for authentic purposes with sincere intentions, they're not following directions or rules. They’re making choices that affect how their message is received. For example, saying "The ball was thrown" versus "John threw the ball" creates different effects and foregrounds different things. There is a difference in meaning. The same is true in “The judge did not believe the witness” vs. “The judge decided the witness lied under oath.” For one thing, “believe” is a “mental verb” while “decided” is a “material verb.”
In teaching students to work with AI tools, this functional perspective on grammar is valuable because it focuses them on the fact that the way they phrase a request to an AI system directly shapes what they'll get back. Learning to prompt is not about finding the "correct" way to ask, not about memorizing syntactic structures, not about searching through a list of recommended prompts; it is about making language choices that mean what you intend to say.
Michael Halliday, a seminal 20th-century linguist with whom Richard Hudson worked, constructed a more complex, robust model of functional grammar than Hudson’s Word Grammar, but both systems of thinking can transform students' relationship with language and improve not only their reading and writing but also their skill in wrangling with the bot to get from language machines what they need to fulfill their intentions.
Halliday developed Systemic Functional Grammar (SFG), which views language as a social semiotic system primarily concerned with how people use language to accomplish goals and make meaning in social contexts. His approach is thoroughly functional: language is a resource for meaning-making.
Hudson developed Word Grammar (WG), which conceptualizes language as a cognitive network in which knowledge through and about language is integrated with other types of knowledge. The individual word is the central unit; it pushes and pulls thought toward other word nodes through dependency and inheritance relationships rather than phrase structures.
The key differences between these approaches are threefold: 1) structural organization, 2) purpose and function, and 3) context and use. Halliday organizes language into hierarchical systems of choices, with the clause as the central unit of analysis. His model employs phrase structure and a ranked hierarchy of units (morpheme, word, group, clause). Hudson rejects phrase structure in favor of dependency relationships between individual words. He views grammar as a network where words directly depend on other words without intermediate phrasal nodes.
Halliday focuses on three metafunctions of language: ideational (representing ideas about experience), interpersonal (enacting social relationships), and textual (creating coherent discourse). I see clear links between this topic-level structure and interactions with language machines: Ideational is exploratory, elaborative, generative; interpersonal is rhetorical, participatory, intersubjective; and textual is compositional, interwoven, comprehensible. Hudson is more concerned with cognitive representation and processing, focusing on how language knowledge is stored and activated in the mind.
Halliday places great emphasis on the social context of language use and how grammar serves social functions, with register and genre being central concepts. Hudson acknowledges social dimensions but focuses more on cognitive structures and mental representation, emphasizing the individual's knowledge network. Hudson’s model rests upon the foundation of language as a social phenomenon but isn’t designed to do what Halliday did. In a sense, Halliday’s is a material model while Hudson’s is a mental model.
Halliday’s approach lends itself to teaching grammar as a meaning-making resource, helping students understand how different grammatical choices create different meanings in context. Hudson’s approach may be more suitable for teaching grammar as a system of relationships between words, highlighting pattern recognition and connections within language. I view Halliday as a long-term project, perhaps better taught systematically over a period of semesters in AI theory and writing courses. Hudson’s work is instructor-friendly and could easily find its way across the curriculum as a down-and-dirty prompting bootcamp for novices.
Both theorists differ from Chomsky's approach to formal grammar and his concept of language as a specialized cognitive faculty. While Chomsky proposes an innate language acquisition device and emphasizes universal grammatical principles, Hudson and Halliday focus more on language as an integrated part of general cognition and social interaction. They embrace meaning and function over form, an orientation which is critical for using language machines. These machines respond to patterns of meaning embedded in language, working with nodes as Hudson would have it. Grammar is a system of sociocognitive relationships rather than a formal set of rules derived from a universal language acquisition device.
***
For teaching LLM prompting, both approaches offer useful insights. Halliday helps students understand how different grammatical and genre structures achieve different communicative purposes, valuable for crafting prompts that effectively communicate broad intent to an AI system. Consider this example of a “project prompt,” a relatively new function of AIs that allows users to upload documents to be consulted during a string of chats. AIs now have the capacity to weave together isolated chats over time, repairing a serious weakness in the technology; just a short time ago, chats were autonomous and disconnected. Here is a project prompt I’ve been developing to enrich a line of thinking I’ve been working on for several months now.
Project Instructions: Brain and AI Language Metaphors Exploration (Created with assistance from Claude 3.7)
I'm embarking on a long-term exploration of the metaphors we use to understand both the human brain and language models. In our ongoing conversations, I'd like to establish the following framework to guide our discussions:
Project Goals:
- Map the historical evolution of brain metaphors across scientific and popular discourse
- Analyze contemporary metaphors used to explain large language models
- Compare these metaphorical frameworks to identify connections, divergences, and blind spots
- Develop new metaphorical models that might better capture both systems
Session Structure:
For each chat, I'll specify which aspect of this project we're focusing on. Please begin each response by briefly summarizing our progress thus far to maintain continuity between sessions.
Research Approach:
- When exploring brain metaphors, please draw from neuroscience, psychology, philosophy of mind, and cultural history
- For language model metaphors, incorporate perspectives from AI research, computational linguistics, and public discourse
- Always distinguish between literal mechanisms and metaphorical explanations in your analyses
- When appropriate, use diagrams or visual representations to illustrate metaphorical frameworks
Content Organization:
- Help me maintain a structured catalog of the metaphors we discover
- Categorize metaphors by historical period, discipline, functionality, and explanatory power
- Identify the entailments and limitations of each metaphorical framework
- Track recurring patterns and novel innovations in metaphorical thinking
Analytical Framework:
Please evaluate each metaphor using these criteria:
- Historical context and origin
- Implicit assumptions about agency, consciousness, and intelligence
- Technological influences on the metaphor's construction
- Explanatory gaps or contradictions
- Cultural and ethical implications
In our conversations, adopt a collaborative, exploratory stance that balances scholarly rigor with creative thinking. Feel free to suggest connections I might have missed or raise questions that could deepen our investigation.
This mega prompt reveals a number of elements visible through the lens of Halliday’s model. I pasted the mega prompt into Claude together with background information on Halliday’s theory and received this:
Analysis of the Prompt Through Halliday's Systemic Functional Grammar Lens (Created by Claude 3.7)
The prompt for the brain and AI language metaphors project demonstrates several key elements of Halliday's Systemic Functional Grammar approach. Here's how it reflects Halliday's theory:
Halliday’s Three Metafunctions
Ideational Function
The prompt clearly establishes the content field (brain and AI metaphors) and sets up conceptual frameworks for organizing knowledge. It uses specialized lexical items that create a semantic domain around metaphor analysis ("metaphorical frameworks," "entailments," "explanatory gaps"). This reflects Halliday's focus on how language represents experience and ideas.
Interpersonal Function
The prompt constructs a collaborative relationship between the participants through:
- Inclusive language ("our ongoing conversations")
- Clearly defined roles and expectations
- Balanced power dynamics ("adopt a collaborative, exploratory stance")
- Modality choices that suggest possibility rather than obligation ("feel free to suggest")
These choices align with Halliday's view that language establishes social relationships and roles.
Textual Function
The prompt is organized into cohesive sections with:
- Clear thematic progression
- Heading structures
- Parallel grammatical constructions in list items
- Lexical chains that maintain continuity (metaphor → metaphorical frameworks → metaphorical thinking)
This organization reflects Halliday's emphasis on how language is structured to create coherent text.
Register Variables (Context of Situation)
Field
The prompt defines a specialized academic domain focusing on metaphor analysis across cognitive science and AI. The lexical density and technical vocabulary establish an intellectual project with specific parameters.
Tenor
The prompt establishes a professional but collaborative relationship, positioning both participants as contributors to a shared intellectual endeavor. The formality level is academic but not distant.
Mode
The prompt is written in a planning/instructional genre, structured as a framework document that will govern future interactions. It uses organizational features (headings, bullet points) to support its function as a reference text.
Genre and Social Purpose
The prompt belongs to the instructional/procedural genre with elements of academic research planning. Its social purpose is to establish parameters for ongoing intellectual collaboration—exactly what Halliday would focus on when analyzing language as social action.
The language choices throughout demonstrate awareness of how grammatical structures serve the social function of establishing a productive collaborative relationship for knowledge generation, perfectly aligning with Halliday's view that language is fundamentally a resource for meaning-making in social contexts.
Hudson’s model does not have the architectural breadth of Halliday’s. Its strength lies in its utility for sharpening work-a-day prompts. For example, I’m preparing to read a neurological study of the brain and know that my background is so limited that I can’t be confident in any understanding I might think I take away from it. I need to develop a semantic framework for approaching the article.
I see Hudson as more useful for scatter-shot, off-the-cuff, in-the-moment prompting—prompting in the wild, like I would need to encounter the article mentioned above. In my opinion, Hudson could be valuable in teaching students to read bot output not just critically for accuracy and validity, but also to examine the relationships between prompts and bot activity, a topic for another post. Here I’ll give you a draft prompt to evaluate, and then I’ll share the evaluation provided to me by Claude:
How did the ancient Greeks use metaphors for the brain?
The central node is, of course, “use,” with two dependencies (Greeks and metaphors) and several default inheritances. For one thing, when combined with its second dependency, metaphors, the default inheritance impacting “use” restricts the meaning to “express” rather than, say, “build” or “take advantage of.” By dependency on “ancient Greeks,” the verb inherits the defaults “cultural spread” and “historical context.” This analysis, carried out in minutes, provides insight into serious flaws. “Use” is extremely weak as the central node carrying the burden of my intention.
“Formulate” would be better; “formulate” calls for a dependent actor who is systematic and methodical, perhaps constraining the bot to scholarly philosophers. “Interpret” is also possible, but its actor dependency could suggest an even narrower exploration, looking just for ancient Greeks who explicitly set out to “interpret” brain metaphors. “Envision” may be worth a try; this verb resonates with both dependencies: “ancient Greeks” can certainly “envision” without “formulating” or “using” or “interpreting.”
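Rather than retyping the question three times, the comparison can be scripted. Here is a hedged sketch using the anthropic Python package; it assumes an API key in the ANTHROPIC_API_KEY environment variable, and the model identifier is illustrative. The three transcripts that follow were gathered by hand, one per verb.

```python
# A sketch for running the root-verb variants side by side. Assumptions:
# the anthropic package is installed, ANTHROPIC_API_KEY is set, and the
# model identifier below is illustrative rather than prescriptive.
import anthropic

client = anthropic.Anthropic()
template = "How did the ancient Greeks {verb} metaphors for the brain?"

for verb in ("use", "formulate", "envision"):
    reply = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # illustrative model identifier
        max_tokens=1024,
        messages=[{"role": "user", "content": template.format(verb=verb)}],
    )
    print(f"--- central node: {verb} ---")
    print(reply.content[0].text)
```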
How did the ancient Greeks use metaphors for the brain?
Claude 3.7 response: I'll explore how the ancient Greeks used metaphors for the brain. This is a fascinating topic that shows how early understanding of cognition developed through metaphorical language.
The ancient Greeks employed several influential metaphors to conceptualize the brain and its functions. The predominant metaphor in early Greek thought was actually not centered on the brain at all, but rather on the heart as the seat of thought and emotion. However, as medical and philosophical understanding evolved, brain-centered metaphors began to emerge.
One of the earliest and most significant brain metaphors came from Alcmaeon of Croton (5th century BCE), who compared the brain to a central governing authority. He described it as the "ruling organ" or "hegemonikon" of the body, similar to how a monarch or council would rule a city-state.
Hippocrates (460-370 BCE) advanced the understanding of the brain by comparing it to a vessel or container that held essential fluids. In his work "On the Sacred Disease," he rejected supernatural explanations for epilepsy and instead attributed it to disturbances in the brain, which he metaphorically described as overflowing or becoming too moist.
Plato (428-348 BCE) used a metaphor of the brain as a wax tablet in his dialogue "Theaetetus," suggesting that memories and knowledge were like impressions made in wax. The quality of the wax (soft, hard, pure, mixed) corresponded to different learning abilities.
Aristotle (384-322 BCE), interestingly, reverted to the heart as the center of thought, but used hydraulic metaphors for how sensory information would be processed and cooled by the brain before reaching the heart.
Later Greek thinkers, particularly the physician Galen (129-210 CE), compared the brain to a complex machine with pneumatic properties, where "psychic pneuma" (a type of vital spirit) flowed through ventricles and nerves as if through pipes or channels.
These metaphors reveal how the Greeks gradually developed more sophisticated understanding of the brain's functions, even without modern technology or neurological knowledge, using familiar concepts from their environment to make sense of this mysterious organ.
***
How did the ancient Greeks formulate metaphors for the brain?
Claude 3.7: The ancient Greeks developed several interesting metaphors for the brain and its functions. I'll explain their key conceptualizations based on my knowledge of classical texts and medical history.
Ancient Greek thinkers formulated diverse metaphors for understanding the brain, with different philosophical schools proposing competing models:
Hippocrates and the Hippocratic school viewed the brain as the seat of intelligence and emotions. They compared it to a governmental center that ruled the body, describing it as the "acropolis" of the body - using the metaphor of a fortified citadel overlooking and governing the city-state.
Aristotle, however, saw the brain primarily as a cooling mechanism for the blood. He formulated a metaphor comparing it to a radiator or cooling device that regulated the "heat" of the heart, which he considered the true center of intelligence.
Plato used the metaphor of a charioteer (reason) controlling two horses (spirit and appetite) to describe the mind, though he didn't specifically localize these functions to the brain.
The Pythagorean tradition described the brain using musical metaphors, suggesting that proper thinking resulted from harmonious "attunement" of the brain, similar to a well-tuned lyre.
Medical writers like Galen later developed hydraulic metaphors, describing the brain as containing ventricles or chambers filled with "pneuma" (vital spirit) that flowed through the brain like water through channels.
These metaphors reflected their emerging understanding of brain anatomy and function, as they tried to make sense of this complex organ using concepts familiar to their worldview.
***
How did the ancient Greeks envision metaphors for the brain?
Claude 3.7: I'd be happy to explore how ancient Greeks conceptualized the brain and their metaphors for understanding it. This is a fascinating topic at the intersection of ancient science, philosophy, and metaphorical thinking.
The ancient Greeks had several interesting metaphors and models for understanding the brain and its functions:
Hydraulic model: Some Greek thinkers, including the Hippocratic school, viewed the brain through a hydraulic metaphor. They imagined the brain as a vessel containing fluids (humors) that could flow, mix, and sometimes become imbalanced.
Cooling system: Aristotle famously viewed the brain not as the seat of thought but as a cooling system for the body's heat generated by the heart, which he considered the center of consciousness.
Wax tablet: For understanding memory, Plato used the metaphor of the brain as a wax tablet that could receive impressions (memories) that might be preserved or might fade over time.
Chamber or ventricle model: Later Greek physicians like Galen conceptualized the brain as having chambers (ventricles) that housed different mental faculties.
Pneuma vessel: Some Greek thinkers considered the brain as a container for pneuma (vital air or spirit), which was thought to be responsible for sensation, movement, and thought.
Changing one word—Hudson’s central node—changed the dependent relations between the first and second dependencies and thereby changed nuances of default inheritance. If we had changed a different word or added a modifier, default inheritance would have shifted in other, unpredictable ways. What happened is this: the differences among these responses stem from the slight wording change in each prompt.
The first prompt asked for a comprehensive historical overview, while the second emphasized different schools of thought, and the third focused specifically on metaphorical frameworks. Each response provides insights, but they prioritize different aspects of Greek thought on the brain. The first is most historically comprehensive, the second highlights competing philosophical schools, and the third organizes information by metaphor type rather than chronology or thinker.
For my purposes, the third output is much more valuable than the other two. Their combined insights suggest that effective grammar instruction for work-a-day prompting would focus on both functional meaning-making and the network of relationships between words that shape how language is processed. I’m still left with the work of verifying that these assertions have been established as factual by human scholarship, but I have a place to start. For sure, I could have found these metaphors through serendipity, or through Google, or through asking a colleague who is an expert in this area. But in the time it takes to cook spaghetti noodles, I have a starting point.
"President Trump signed the executive order," the reporter wrote in an article about the Alien Enemies Act. Here, the grammar focuses the reader on the dependency (Trump) and the action (signing). The active voice construction places responsibility clearly on the president as the doer of the action.
"The executive order was signed on Friday," brings about a thought reversal, drawing attention to the result of the action, completely eliding the agent. This passive construction shifts focus away from who performed the action to merely what happened.
This shift in focus became particularly relevant when Trump later claimed, "I don't know when it was signed, because I didn't sign it.” The passive construction "it was signed" aligns perfectly with Trump's subsequent denial of personal involvement, as he further deflected agency by stating that "other people handled it.”
The material focus of these statements—the executive order, the signing, the president, Friday—centers on physical objects and actions. The signing was a bodily action using physical objects (pen, paper) to accomplish a policy goal. The grammatical choices in both constructions significantly impact how readers perceive responsibility and agency in this politically consequential event.
Each choice reveals a different perspective on the same event. By helping students understand these choices, we equip them to control how they present information and influence readers.
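One quick way to make agency visible is to let a parser flag it. The sketch below assumes spaCy’s English model, where a passive subject is labeled nsubjpass, a passive auxiliary auxpass, and a “by” phrase agent; it checks whether each sentence is passive and whether the agent survives at all.

```python
# A minimal sketch, assuming spaCy's English model: flag passive
# constructions and check whether the agent (a "by" phrase) is named.
import spacy

nlp = spacy.load("en_core_web_sm")

sentences = [
    "President Trump signed the executive order.",
    "The executive order was signed on Friday.",
    "The executive order was signed on Friday by President Trump.",
]

for sent in sentences:
    doc = nlp(sent)
    is_passive = any(t.dep_ in ("nsubjpass", "auxpass") for t in doc)
    has_agent = any(t.dep_ == "agent" for t in doc)
    print(f"{sent}\n  passive: {is_passive}, agent named: {has_agent}")
```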
Ask students to highlight central nodes and dependencies in bot output—or have them instruct the bot to do this highlighting. For example, here is a prompt I wrote for the bot: “Pull apart the differences between Halliday and Hudson and stuff them into equal size baskets.” You don’t need to see the actual output, but you do need to see my next prompt: “Now assess your use of mental action words in that output in terms of variety, specificity, and clarity.”
I’ll leave you with the bot’s self-assessment and a few encouraging words.
Claude 3.7: Examining my use of mental action words in the previous output:
## Variety
My comparative analysis contains limited variety in mental action verbs. I primarily used basic structural verbs like "prioritizes," "centers," "uses," "employs," "focuses," "organizes," "draws from," and "treats." This creates a somewhat repetitive pattern across the baskets, lacking nuance in how I characterized the cognitive dimensions of both linguistic theories.
## Specificity
The mental action words I chose are moderately specific to linguistic analysis but lack precision in describing how Halliday and Hudson conceptualize the cognitive processes involved in language. For example, rather than simply stating Hudson "employs a flat dependency network model," I could have used more precise cognitive verbs like "conceptualizes," "visualizes," or "maps" to better capture the mental modeling involved.
## Clarity
While the mental action words I used provide basic clarity about the contrasting approaches, they don't fully illuminate the different cognitive mechanisms each theorist proposes. For instance, when describing Halliday's metafunctions, I missed the opportunity to use more precise terms like "interprets," "construes," or "processes" to clarify how these metafunctions operate in the mind according to his theory.
## Overall Assessment
My comparison would have benefited from a richer vocabulary of mental action words that more precisely capture how each linguist theorizes about language processing and representation. Using more varied and specific cognitive verbs would have conveyed not just what the theories contain, but how they envision mental engagement with language functioning in humans.
In connecting the functional grammar models of Hudson and Halliday to AI prompting, we find ourselves stumbling onto a paradigm shift. The uninvited guest of traditional grammar—with its peacock-like strutting of rules and categories—offers little help when wrangling with the bot. What matters instead is Hudson's web of word dependencies and Halliday's architecture of meaning-making.
When we prompt these language machines, we aren't following arcane grammatical commandments but creating webs of meaning where a single word choice ripples throughout the entire system. My own scatter-shot experiments with "use," "formulate," and "envision" revealed how changing just one central node reshapes the entire landscape of the response. This isn't about memorizing which modal auxiliary goes where—it's about seeing how language actually works in the wild.
Halliday's approach gives us the big picture for mega-prompting, that long-term relationship with the bot where ideational, interpersonal, and textual metafunctions become the pillars of our architectural design. Meanwhile, Hudson's more granular focus feels tailor-made for the work-a-day prompt, those in-the-moment linguistic nudges we give the machine. The difference between asking how ancient Greeks "used" versus "envisioned" metaphors for the brain isn't mere stylistic fussiness—it's like switching out an engine part that changes how the whole machine runs.
Teaching grammar through these functional lenses doesn't just improve writing—it's the Vegas theory in reverse. What happens in the classroom actually matters outside of it. When students grasp grammar as a meaning-making tool rather than a tyrannical rule-giver, they gain the power to bend these language machines to their will, to read bot output not just for accuracy but for the web of relationships that produced it.
Grammar isn't dead or dirt-like after all. It's alive, dynamic, and suddenly relevant in ways those traditional grammarians never imagined. In our AI-augmented future, understanding the functional relationships that give language its power isn't just another academic exercise—it's the difference between being the driver or the passenger in our interactions with increasingly sophisticated language machines. The unwelcome guest at the party finally has something interesting to say.
1. https://eric.ed.gov/?id=ED522689
2. Denu, G., Teshome, Z., & Ferede, T. (2022). Effects of Contextualized Grammar Instruction on Students' Paragraph Writing Achievement and Their Perception Towards the Instruction: Grade 11 in Getema High School in East Wollega Zone in Focus. International Journal of Graduate Research and Review, 8(4), 103-111.