I don't know David Coleman, the Rhodes Scholar etched in history as the Architect of the Common Core State Standards, personally. In this longish essay, I will vivisect ‘David’ from his role as the ‘Architect,’ severing the human being from his actions. Particular aspects of the Architect’s design are brutal, in my opinion. At the risk of sounding Trumpian, we wouldn’t have the Common Core if Linda Darling-Hammond had been tapped to serve as Secretary of Education in 2008. Can there be a doubt that the Department of Education, serving as a coherent leadership body, offers more promise for peace than the Department of Defense?
Who owns the text of the Common Core? Did it come from the Architect? For the life of me, I can’t fathom how the Architect, well-learned in philosophy and literature at Oxford, got to flash freeze public schools for analysis into skeletal bureaucratic structures in the void like “grade level” or “disciplines” or “assignments” or “complex texts” and then leave out “instruction,” then use this abstraction to design a caricature of a 12-year AP syllabus starting in kindergarten or, said diffidently, a boot camp for college where nobody gives a fig what you feel or what you think, and career is just another word for nothing left to lose.
The Facts and Nothing but the Facts (well, almost)
In the summer of 2008, David Coleman, along with Gene Wilhoit, then-director of the Council of Chief State School Officers (CCSSO), made a presentation to Bill and Melinda Gates at Gates Foundation headquarters near Seattle. As context, keep in mind that NCLB had been in place for 7 years, requiring standardized testing but allowing states to set their own standards with obvious variations in expectations. The incentive was to set low standards and show more improvement over time.
The most recent Great Recession was unfolding, putting pressure on state education budgets and raising concerns about American competitiveness. As the Dow Jones plummeted over 500 points in a single September day and Lehman Brothers collapsed, states slashed education funding with brutal efficiency. California alone cut $7.4 billion from its schools, forcing districts to lay off more than 30,000 teachers. Panicked families watched retirement accounts evaporate while school superintendents made agonizing choices between cutting art programs or firing counselors. The national mood vacillated between desperation and fury as Americans witnessed Wall Street executives receiving bonuses while public school buildings crumbled.
In this climate of economic anxiety, fears about America's global standing intensified. International test scores showed American students lagging behind competitors in Shanghai, Finland, and South Korea, countries aggressively positioning themselves for the knowledge economy. Business leaders like Gates regularly cited how American 15-year-olds ranked 25th in math and 21st in science among developed nations as evidence of economic doom on the horizon.
Barack Obama was elected in November 2008 on a platform that included education reform, signaling potential policy shifts. His victory speech in Chicago's Grant Park specifically promised to "fix our schools" to prepare children to compete in a global economy. His appointment of Arne Duncan, a pragmatic reformer who had led Chicago Public Schools, as Education Secretary sent a clear signal to those of us hoping Darling-Hammond, who had advised Obama’s campaign, would get the nod: this administration would not be pursuing the changes we had hoped for.
Duncan’s department offered incentives over the first five years of Obama’s presidency that convinced nearly every state to pledge adoption of the administration’s preferred policy changes. Obama's transition team had quietly developed plans for what became the $4.35 billion Race to the Top competition, using stimulus funds to incentivize states desperate for money to adopt reforms, including common standards. First, 18 states and Washington, D.C., won Race to the Top grants for promising to adopt the policy changes. Then, nearly all states applied for a waiver allowing them to duck penalties for missing some of the achievement targets of No Child Left Behind in exchange for signing up.
For Coleman and Wilhoit, in 2008, this convergence of economic fear, bipartisan concern about American competitiveness, and an incoming reform-minded administration leaning toward national standards and standardized tests, created the perfect moment to approach Gates. They weren't selling educational reform. They were offering a solution to economic anxiety and a declining American empire, all packaged in data-driven language that Gates found irresistible.
According to a convergence of media reports from the time, Gene and David told Gates and his people that the experiment with academic standards and standardized measurement was failing because the rigor of standards varied so wildly between states that “high school diplomas had lost all meaning." They painted a dire portrait of higher education in which "as many as 40 percent of college freshmen needed remedial classes.” They argued that "U.S. students were falling behind their foreign competitors." Perhaps most persuasively to Gates, they contended that "a fragmented education system stifled innovation because textbook publishers and software developers were catering to a large number of small markets instead of exploring breakthrough products.”
Several journalists noted at the time that Bill Gates, as the creator of the dominant computer operating system in the Western world, had a heightened appetite for changing the world:
"Can you do this?" Wilhoit recalled being asked [by Bill]. "Is there any proof that states are serious about this, because they haven’t been in the past?"
The Architect was spiritual in his quest to optimize the earning powers of young people. Coleman described this as a "holy bargain" with students that "if you meet them year by year you are indeed ready for college and career.” In another meeting in 2012, Coleman was quoted shedding light on what he intended by “career ready”:
“It is rare in a working environment that someone says, Johnson, I need a market analysis by Friday but before that I need a compelling account of your childhood.”
The Common Core State Standards (CCSS) were primarily designed for public schools to establish consistent academic benchmarks across states. However, private schools were not mandated to adopt these standards. Despite this, many private schools, including religious and independent institutions, voluntarily chose to implement or adapt the CCSS. Reasons for this included alignment with standardized tests like the SAT and ACT, which were aligned with the Common Core thanks to our friendly Architect (later to tackle the AP program), and the desire to remain competitive with public schools. Additionally, some private schools saw the CCSS as a useful framework for enhancing curriculum depth and promoting higher-level thinking skills.
The Common Core State Standards are owned by two organizations: the National Governors Association (NGA) Center for Best Practices and the Council of Chief State School Officers (CCSSO). The copyright is held jointly by these two organizations, not by individual states or the federal government. However, states that have adopted the standards have the right to use and modify them for educational purposes within their jurisdiction. The copyright includes a public license that allows for certain uses of the standards, but with specific restrictions. For example, reproduction is permissible only "for purposes that support the Common Core State Standards Initiative.” It's important to note that while the standards themselves are copyrighted, the concept of educational standards cannot be copyrighted. The specific expression and arrangement of the Common Core Standards are what is protected by copyright law.
The ownership of the Standards presents a thorny problem. Clearly, at least as I see it, AI is soon going to require some sort of coherent set of standards, at least within state boundaries. There is no legal process to revise or update the Common Core State Standards. They are frozen in time and, in my opinion, unlikely to accommodate a future with AI, at least not in English and the Language Arts.
Now for a more creative nonfictional look…
David Coleman's vision for education reform mirrors that of Henry Higgins, the character who puts lipstick on the language of Eliza Doolittle in Shaw's masterful Pygmalion. Coleman hammers students with text-dependent analysis while Higgins drills Eliza on proper pronunciation. Neither tolerates deviation. The education reformer scoffs at student feelings; the phonetician mocks Eliza's tears. "People don't give a…[fig]…about what you feel or think," Coleman declared to educators in Albany in 2012, echoing Higgins' dismissive attitude toward Eliza's struggles.
Both men seek to orchestrate external performances in their students, never mind what they feel or think—from his throne, Coleman activated administrators to activate teachers to coerce students with grades and promises of a better life later on to excavate textual evidence from complex literature and nonfiction like Coleridge’s “slaves in a diamond mine” while suffocating personal meaning and aesthetic connections; from his place of superior knowledge, Higgins bullies Eliza into mechanically parroting phrases that erase her Cockney identity. In today’s reality, one might argue that Higgins aka Coleman tried to turn their learners into artificially intelligent machines without a trace of a human background, emotions, or personal thoughts.
The Common Core architect seems to have purposely blinded himself to structural socioeconomic hardships that ambush student learning from behind, just as Higgins shrugs off the entrenched class barriers that will block Eliza's prospects despite her polished accent. As a child of poverty myself, knowing, with Wordsworth, that the child is the father of the man, I am living proof that parroting phrases in the King’s English has little to do with deep learning just as continuously arguing for my interpretation of a novel has little to do with what I’ve managed to learn. For sure, I learned to parrot with the best of them, and I’ve learned a thing or two about argumentation, but I’ve guarded against losing my native tongue somewhere between an Appalachian mumble and an Alabaman drawl. I wanted to visit my family and keep my inner self alive in my first language.
Coleman resents the ugliness of poverty-influenced responses, preferring critical, impersonal analysis of high-brow texts that sanitizes uncomfortable realities. Higgins similarly abhors the cultural markers of Eliza's background, viewing her original speech patterns as blemishes to be scrubbed rather than as valid expressions of her lived experience. Both men smugly parade their privileged perspectives as universal standards while throttling the authentic voices of those they claim to elevate. Coleman swats away emotional responses like flies that might bring germs into the classroom; Higgins sneers at expressions that betray working-class origins. Their reforms mask contempt for the very populations they profess to help.
Coleman's standardized approach bulldozes through diverse learning needs, leaving untouched inner wounds inflicted by want and need, not to mention ignorance; Higgins' rigid methods crush Eliza's confidence before reluctantly working to build it anew. Neither man's blueprint acknowledges human complexity. Both reveal their own privileged assumptions rather than illuminating paths to personal growth or social mobility. I have the same issue with current efforts to standardize “knowledge” across the curriculum, a blueprint for deep programming of human beings, making sure they know both how to think the right way and what to think about.
Both men exhibit a remarkable confidence in their methods while showing limited interest in their subjects' existing knowledge or identities. Coleman's dismissal of what students "feel or think" parallels Higgins' indifference to Eliza's emotional responses to his teaching methods. The transformations they envision are focused on producing observable behaviors that satisfy predetermined standards rather than scaffolding personal understanding and self-expression. In their vision there is no room for real people.
Higgins aims to remake Eliza to masquerade as royalty in high society; Coleman's approach aims to prepare students for college without acknowledging the Coleman Report (no relation, I’m told) from 1966, which exposed two distinct worlds of schooling, one for minorities in poverty, the other for affluent families. Higgins’ and Coleman’s approaches reflect a technocratic belief that standardization and rigor in the classroom can overcome systemic inequalities, and both ultimately reveal more about their own class-biased assumptions than about effective methods of education as a mechanism for social mobility.
The backlash to Coleman’s sh*t comment was swift and justified. Some teachers bristled at the unprofessional tone, some at the false dichotomy between objectivity and subjectivity. Some cringed at his embarrassing tone-deafness. Some, who had worked a lifetime to build classroom communities that value cultural and interpersonal traditions in the writing classroom, marveled at the insensitivity—I’m thinking of Jayne Marlink, my friend and colleague, who became a national leader in the National Writing Project; I encourage her to comment on this point. Others couldn’t resist pointing out the hypocrisy of a man who routinely peppered his own presentations with personal anecdotes.
Unsurprisingly, in my experience talking with teachers over the years, the most damning aspect of his assertion was the anti-student sentiment—a disrespect for young people's voices and experiences. Teachers working in high-poverty communities were unsettled to learn what was at the core of the Common Core's insistence on text dependency. With the text as the sole source of information while reading, students known to struggle were denied access to resources outside its four corners. They read completely on their own in a world that had all but abandoned them already. In my content area literacy course for secondary credential candidates, I called this the “marooned alone on an island” theory of reading. It is the unfortunate reader who is alone on an island. Alas, where else can a reader be if no one “gives a fig” about their thoughts and feelings?
In fact, Coleman’s vision is theoretically and empirically indefensible. It’s with no pleasure that I make this assertion. Would that I were wrong. The remainder of this post discusses mainstream, established models of reading comprehension that were available to Coleman. Things did not have to be this way. Indeed, researchers who created these models had a direct line to Coleman. Unfortunately, he didn’t give a fig about what they thought or felt, either.
As you consider these mainstream models, think about how the highest level of reading comprehension defies the language machine. Said differently, the lower two levels, what I call the Coleman levels, can be handled by bots because nobody gives a shit about what they think or what they feel or what they say. Conceptually, it would be easy to fix the flawed dehumanizing assumption operating in the Core standards: keep levels 1 and 2, integrate level 3+. Humans can manage level 3.
For bots to do level 3? Impossible. Level 3 has not been and will never be achieved by LLMs.
Three Established Three+ Level Models of Reading Comprehension
Reading comprehension research over many decades has produced several influential theoretical frameworks that conceptualize comprehension occurring across at least three distinct levels. These models, though they use different terminology, all recognize that reading involves interaction between what's on the page and what's in the reader's mind. Each model offers unique insights into how readers extract and construct meaning from texts, but they all see a reciprocal relationship between the text representing the writer and the meaning representing the reader. In none of them is the reader stuck inside the four corners of the text as Coleman would have it.
I’ve chosen these three+ models to discuss for three reasons: 1) As a classroom teacher in several Title I public schools and later as a professor in a teacher preparation program, I found these models to be familiar and useful to practitioners; 2) as a researcher and theorist, I know that these models and others that fit the concept have been central to ongoing discussions of comprehension among teachers and researchers over decades; and 3) as a professional development consultant, I’ve presented these models with some success in teacher professional development settings. We’ll start with Walter Kintsch’s Construction-Integration model. Then we’ll look at Thomas Barrett’s five-level model, followed by Pearson and Johnson’s model, from which Taffy Raphael’s influential and powerful QAR (Question-Answer Relationship) strategy grew. NOTE: Full-length sources are listed in the Reference List.
Construction-Integration
Walter Kintsch, a seminal cognitive psychologist, developed his three-level model in the 1980s to explain how readers process text at multiple levels simultaneously. He pointed first to the surface level that represents the exact linguistic features of the text—the specific words, phrases, and syntax that appear on the page. At this level, readers recognize vocabulary and decode sentences, but deep comprehension has not yet occurred. This representation is typically short-lived in memory unless deliberately committed to memory. Second, the text base is a semantic representation of the text's explicit content where readers synthesize the meanings of words and sentences into a more stable whole. Here, readers identify important ideas and supporting details while constructing a coherent representation of what the text explicitly states. This level involves connecting sentences and paragraphs to form a unified understanding. These are the two levels Coleman is interested in.
The most sophisticated third level is the situation model, where readers integrate the text base with their prior knowledge to create a mental model of the situation described in the text. This level transcends what's on the page by incorporating background knowledge, inferences, experiences, visualizing described scenarios, and more. The situation model represents deep comprehension and is associated downstream with better recall and application of knowledge.
To link Kintsch to Coleman, think about what happens when a reader encounters this passage from Steinbeck's Of Mice and Men. At level one, words and phrases are identified; at level two, the words and the sentences are made into a textual meaning; and at level three, the situation model activates and things get interesting:
"The bunkhouse was a long, rectangular building. Inside, the walls were whitewashed and the floor unpainted.”
At the construction level, a bunkhouse, a rectangle, white walls, unpainted floors. People sleep in a bunkhouse, in this case, the workers. Military people live in bunkhouses. A rectangle looks like a square but has unequal sides. The bunkhouse exists.
At the integration level, a reader at the time of Steinbeck’s writing might have drawn on first-hand prior knowledge of Depression-era farm buildings to visualize not just any rectangular building, but one with specific architectural features common to that time period. A reader with personal experience of being in these buildings might mentally "feel" the temperature inside such a structure, "hear" how footsteps echo on unpainted wooden floors, or "smell" the particular scent of whitewashed walls.
The reader might infer the economic status of the ranch (whitewash being cheaper than paint), the utilitarian purpose of the space (designed for function, not comfort), and the social implications (communal but impersonal living quarters). A reader back in the day who had background knowledge of migrant farm work in the 1930s might have connected this sparse description to the transient nature of the workers' lives. None of these inferences are explicitly stated in Steinbeck's spare description, yet they create a rich mental representation that significantly enhances comprehension. Steinbeck himself didn’t stay up at night plotting out all this resonance. He just told his story.
Fast forward to 2025. A high school sophomore is reading this novella for English class. This reader, limited to Coleman's text-dependent approach, would be restricted to what Steinbeck directly states, processing the lexical input but duty-bound not to let personal reactions or speculation enter their thoughts. What if prior knowledge of the era were built before this high school sophomore began reading the novella, giving them the advantages a reader alive at the time of publication already had? “Keep inside the text,” the metacognitive voice of the Architect whispers.
This example demonstrates why Kintsch argued that complete comprehension is impossible without the integration of the reader's prior knowledge and experiences with the text—directly contradicting Coleman's insistence on text-boundedness and logical argumentation. It also shows why adding that third level, acknowledging the power of the reader to construct a unique understanding, makes the difference. Even if a student used a language machine to learn from a prompt like “What was it like to be a migrant farm worker in the 1930s, where Steinbeck set the story in Of Mice and Men?”, that student could have access to simulated general knowledge to bring to bear during the act of integrative comprehension.
Barrett’s 3+2 Model
During the 1960s Thomas Barrett developed a model of reading comprehension that became influential in the years before the cognitive revolution in reading research. Apparently, Barrett presented his taxonomy at the annual meeting of the International Reading Association in 1970, though his original paper, Taxonomy of the Cognitive and Affective Dimensions of Reading Comprehension, was never formally published. There is a more detailed account of Barrett’s Taxonomy in Theodore Clymer’s (1968) article, “What is ‘reading’?” The article was part of a collection of readings for the Open University’s Reading Development course in the 1970s, and copies can still be found online. It outlines five levels that progressively challenge readers' cognitive and affective understanding of texts: 1) Literal Comprehension, 2) Reorganization, 3) Inferential Comprehension, 4) Evaluation, and 5) Appreciation. The relationship to Bloom’s Taxonomy seems fairly clear, except that the highest level, Appreciation, is emotional rather than judgmental; Evaluation, the judgmental level, comes just before it.
At the foundational level, Literal Comprehension focuses on recognizing and recalling explicitly stated information, such as details, main ideas, and sequences. Reorganization requires readers to synthesize or restructure textual information by classifying, summarizing, or outlining ideas. Inferential Comprehension moves beyond the text to "read between the lines," requiring predictions, interpretations, and inferences based on textual clues and prior knowledge. The higher levels—Evaluation and Appreciation—engage critical thinking and emotional responses, asking readers to assess the quality or validity of ideas and connect personally with the text. Together, these levels provide a structured framework for developing and assessing deeper reading comprehension skills.
Coleman flattens this progression considerably, primarily emphasizing text-dependent analysis without acknowledging the value of the evaluation and appreciation levels, where readers make judgments based on criteria outside the text and attend to their emotional responses. Can a language machine do this sort of reading? Critical engagement differs in purpose and strategy from evidence extraction and argument making. Barrett's model, especially at the evaluation level, explicitly encourages readers to assess texts based on their own knowledge, experience, and understanding of external criteria and to respond in ways that are real to them.
Coleman's approach minimizes the role of reader knowledge and experience, treating them as potential distractions rather than valuable tools for deepening understanding. Barrett's taxonomy implicitly views reading as a developmental process that ultimately leads to independent critical thinking. Coleman's approach suggests readers develop by reading texts that are hard for them, primarily to extract information and evidence for argumentation that will someday be useful in college. Barrett's model represents reading as a process that begins with the text but extends well beyond it, while Coleman's approach artificially confines readers within the text's boundaries. It makes sense that someone who doesn’t give a fig about what a student thinks or what a student feels would find Barrett distasteful and unhelpful.
The Model That Found a Way to the Classroom
Pearson and Johnson's three-level model categorizes comprehension based on the relationship between questions, text information, and reader knowledge/response. This model, developed in the late 1970s, had both theoretical and practical impacts that helped researchers and teachers understand that not all questions about texts have the same relationship to the information on the page. The text-explicit level involves comprehension of information directly stated in the text, what I’ve come to think of, after Sheridan Blau, as the “brute signifiers” of a text: This text is about whales, not about the Rocky Mountains. “Who shoots Lennie at the end of Of Mice and Men?” is an example of a level 1 question.
At the text-implicit level, readers connect information across different parts of the text to draw conclusions not explicitly stated. “How does Curley's wife's behavior reflect her loneliness on the ranch?” is a level 2 question, one which avoids Coleman’s unnatural forcing of readers to stay inside the four corners of the text. Answers to text-implicit questions require integrating multiple sentences or passages and filling gaps the author left open through micro inferences from prior knowledge. To successfully address an implicit question, a reader engages in a mental process that begins with analyzing what the question is asking, then methodically searching across multiple sections of text to gather relevant specifics that might not be immediately obvious.
The question about Curley’s wife is not simply asking for a specific text-explicit description of her loneliness, but specifically about how her behaviors serve as evidence of that loneliness. A reader would then begin a search for examples of her behavior followed by an analysis of how these behaviors signify loneliness. A strong answer would cite specific examples from the text (her frequent appearances at the bunkhouse, her attempts to engage men in conversation, her confiding in Lennie, etc.) and explain how each example demonstrates her isolation.
From the perspective of reader response theory, students need not provide logical proof that their explanation is warranted by specific evidence. The object is not to argue, but to respond as an individual making meaning and to express the response. The difference is reader empowerment vs. reader constraint. Having recently reread Steinbeck’s novella, I’m able to express my response to “How does Curley's wife's behavior reflect her loneliness on the ranch?” without making a federal case of it. I’m not in any way suggesting that this is the most logical or “correct” one, college and career ready. It’s highly personal and holistic. It’s not a rhetorical essay competing to showcase brilliance in critical thinking, and it could earn maybe a “C-” or lower in a Colemanesque classroom.
No doubt a teacher using a mastery system would give it back to me to redo. It wouldn’t matter that a few months ago, I thought about The Grapes of Wrath for some unknown reason. I stumbled onto my ratty copy of Of Mice and Men and reread it in short order. It made me thirsty to reread Tortilla Flat and then reread T.C. Boyle’s Tortilla Curtain, I think because of emotions in me stirred up by President Trump’s assault on immigrants. It’s been years since I’ve read these books, and congruent with a lifelong attraction to literature, I feel a need to reconnect with both Steinbeck and T.C. Boyle. Here is me, pretending I’m a high school sophomore in a reader response class responding to the text-implicit question:
“Curley's wife is always showing up at the bunkhouse and wandering around the farm, which is a big clue that she's lonely. You don’t have to look very far to see it. She keeps trying to talk to the ranch hands even though it's obvious they don't want her around. In my experience, people don’t keep going somewhere they're not wanted unless they’re desperate for someone to talk to, right? It makes me feel sad when I think about it. She's so lonely that she'll take any kind of attention, even if it's negative.
The way she dresses and does her makeup says a lot, too. Steinbeck keeps mentioning her red lipstick, red fingernails, and how fancy she styles her hair. She puts in all this effort to look good, but for what? It gives her a fantasy life? The guys on the ranch don't care about her. It makes me feel sorry for her and angry with the men because she's trying so hard to get someone to notice her. I really do not like men like those in the bunkhouse. It's like she's screaming "Look at me! I exist!" but everyone's ignoring her. No one even gives her a name in the whole book! She's just ‘Curley's wife’ like she doesn't deserve an identity. There isn’t anyone but Lennie for her to connect with.
I realize she wasn't trying to cause problems. That makes me feel angry for her. Imagine being the only woman on a farm full of stupid men who avoid you and having a husband who doesn't seem to care about you. More Trump stuff!!! I feel bad she has no friends and no one to talk to about her feelings or dreams. I feel fortunate that I do have people in my life who care about me.
The saddest part is that she just wanted someone to listen to her and connect. When she finally reveals to Lennie her dreams of being in the movies, it's the only time we see the real her. She's not trying to be flirty or get attention. She's honest about her hopes. That part made me feel like Steinbeck conceives of her as person with dreams who got stuck in a really bad situation.”
Pearson and Johnson’s most advanced level requires integration of textual information with the reader's prior knowledge or "scripts" (schemas and experiences). Script-implicit questions require textual information and a durable link to the text, but readers must activate relevant background knowledge to construct a complete answer. “What can we conclude about the relationship between men and women in the 1930s?” could be a fruitful script-implicit question.
Pearson and Johnson's taxonomy grew from schema theory (Anderson & Pearson, 1984) and constructivist views of reading that recognize reading as an interactive process between reader and text. Their work challenged purely text-based models of comprehension by emphasizing that many legitimate comprehension processes require readers to bring knowledge and world experience to the text.
The model has strong connections to transaction theory (Rosenblatt, 1978), which views meaning as emerging from the transaction between reader and text rather than residing solely in either the text or reader. By identifying script-implicit comprehension as the highest level, Pearson and Johnson validated the essential role of the reader in meaning construction, a role that can include traditional literary criticism in small doses but involves a whole spectrum of personal responses and extensions that move away from the text.
Pearson and Johnson’s model has been used effectively in teacher preparation courses and professional development, supporting teacher-designed activities as well as assessment tools. An important instructional application came through Taffy Raphael’s Question-Answer Relationships (QAR) framework, which translated Pearson and Johnson’s theoretical model into classroom practice. Raphael adapted the three levels and expanded them into four: 1) Right There, 2) Think and Search, 3) Author and You, 4) On Your Own.
David Coleman would likely agree with the text-explicit/Right There level and with the text-implicit/Think and Search level, with the caveat that readers take an argumentative rather than a responsive stance toward peers and the teacher. He might find some limited value in the Author and You level. But a non-negotiable tenet of close reading is text-boundedness. All the other models contain a non-negotiable of their own: the active, empowered, critical reader reading for themselves, not to satisfy someone else’s standard. That’s where expert teaching shows up in the mix.
The Long and Winding Road to Mature Reading
Established reading comprehension models and the Common Core's close reading approach clash dramatically, exposing a stark rift between decades of reading research and Coleman's educational policy. While robust research across diverse theoretical frameworks confirms the vital importance of reader knowledge, experience, and response, Coleman's model shackles readers to the text itself, preventing them from engaging with their own interpretations and reactions.
To be fair, Coleman's intentions may have emerged from legitimate pedagogical concerns. Perhaps he encountered classrooms where unmoored personal responses eclipsed textual evidence entirely. His insistence on text-dependent analysis might represent an overcorrection to what he viewed as unfettered reader response floating free from textual anchors.
Coleman might also have targeted a genuine weakness: students entering higher education and workplaces often struggle to bolster claims with solid evidence. His emphasis on textual evidence could aim to cultivate students' ability to craft well-supported arguments—undeniably valuable in academic and professional settings. Furthermore, his approach might spring from a desire to foster educational equity by compelling all students, regardless of background, to wrestle with complex texts rather than retreating to personal connections of varying depth and sophistication. The standardization embedded in text-dependent reading might attempt to democratize the educational landscape.
Yet despite these generous interpretations, Coleman's approach collapses under scrutiny. By subjugating the reader beneath the text rather than embracing the reader as an active co-creator of meaning, the Common Core's vision of close reading defies not just isolated theories but the entire spectrum of major reading comprehension models developed since the 1970s. The inescapable reality? Coleman's model isn't merely insufficient; it's fundamentally impossible to implement.
No reader approaches a text as a blank slate, devoid of prior knowledge, experiences, thoughts, and feelings—even when told these elements don't matter. Effective teachers can validate students' interpretive voices while maintaining rigorous textual engagement. Modern large language models (LLMs) could actually enhance this third level of reading comprehension rather than replacing it. Students might use LLMs to generate background information about historical contexts, explore literary movements, or examine an author's influences—all resources that enrich the situation model in Kintsch's framework without doing the essential integrative work that makes reading uniquely human. LLMs could help students build scaffolds for their own meaning-making without constructing the entire edifice of interpretation.
Even the most analytical, methodical reading remains, at its heart, a dynamic interplay between reader and text. Denying this reality isn't just pedagogically flawed—it contradicts how human minds actually create meaning from words.
Instead of isolating students on textual islands, we should guide them through the rich social, cultural, and historical territories that connect texts and readers, surrounded by empathetic peers and mentored by teachers who deeply understand and love literature. Despite Coleman's dismissive stance, exceptional teachers recognize the profound importance of what readers think and feel—perhaps the most crucial element in authentic learning. By embracing all three levels of reading comprehension and using new technologies to enhance rather than replace the human elements of reading, we might finally move beyond the limitations of Coleman's flattened vision.
REFERENCES
Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading comprehension. In P. D. Pearson (Ed.), Handbook of reading research (Vol. 1, pp. 255-291). New York: Longman.
Barrett, T. (1972). Taxonomy of reading comprehension. Reading 360 monograph. Lexington, MA: Ginn.
Coleman, D. (2012, April). Bringing the Common Core to life. Presentation at the New York State Education Department, Albany, NY.
Kintsch, W. (1974). The representation of meaning in memory. Hillsdale, NJ: Lawrence Erlbaum Associates.
Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95(2), 163-182.
Pearson, P. D., & Johnson, D. D. (1978). Teaching reading comprehension. New York: Holt, Rinehart and Winston.
Raphael, T. E. (1982). Question-answering strategies for children. The Reading Teacher, 36(2), 186-190.
Raphael, T. E., & Au, K. H. (2005). QAR: Enhancing comprehension and test taking across grades and content areas. The Reading Teacher, 59(3), 206-221.
Rosenblatt, L. M. (1978). The reader, the text, the poem: The transactional theory of the literary work. Carbondale, IL: Southern Illinois University Press.
Remarkable is a carefully chosen word. While I’m confident in my understanding of effective literacy pedagogy, my confidence is not “remarkable.” What is remarkable is that someone like me can stand on the shoulders of giants, one of whom I cite in this post. The educators I respect the most are the most humble and approachable, with a scientific skepticism that prevents them from jumping on bandwagons and blinding themselves to harsh and ugly realities in the lives of hundreds of thousands of young people in schools.
Though Coleman claimed the standards would put a ‘career’ within reach following high school graduation, there is no doubt that he meant ‘preparation for college and more college.’ See this article from the Atlantic. Coleman went on to revise the SAT to align with the Core standards, an attempt to engineer a strong correlation between GPA earned under the Core approach and college admissions test scores. A firm believer in standardized tests, he views human beings as programmable and malleable.
More superb work - thank you! Running alongside this shift to "inside the text" came a concurrent push for more young adult texts that students could personally relate to, overlapping soon enough with social media pushing discourse into smaller chunks, along the lines of Zuckerberg's "pivot to video" sales strategy in the mid-2010s. Put it all together, and it did seem that the student desire to read thoughtful literature exploring mature viewpoints on why we are alive and what we can do with the life we have had been extinguished. Not so, thankfully, in my classes over the last two years - I've been glad to see inquiry about works with depth, a student push toward literature with more substance.