Just a few months ago, two well-respected researchers in the field of reading took a scalpel to the theoretical framework of the self-titled “Science of Reading,” the approach to reading instruction made palatable to policy makers by journalist Emily Hanford and others. Download the pdf file for free here: Tierney and Pearson, 2024. They identify ten SoR claims for phonics, along with the evidence SoR points to in support of phonics as a silver bullet. Ruthlessly logical, painfully civil, impeccably thorough, these elder statesmen carefully and thoughtfully destroy each claim. They write particularly eloquently against the monstrosity of claim 8, the brain evidence.
I recommend this section on claim 8 as much for its literary power as for its respect for cognitive neuroscientists, who have been saying nothing like the things SoR puts in their mouths. In some ways, in the spirit of reciprocity, these ten-plus pages reclaim the tarnished integrity of neuroscientists whose work has been misappropriated by SoR to exact revenge on mainstream reading researchers, who watched in horror as state legislature after state legislature intruded into the professional practice of reading pedagogy.
Claim 8, the most insidious of the SoR claims, is this: direct and explicit phonics instruction, carried out in isolation for every child, is mandated by evidence from cognitive neuroscience. It is the only way, they say. It isn’t. It simply isn’t supported. To boot, the claim implies that teachers who resist a strict, take-no-prisoners, phonics-first-and-furious approach are, by analogy, binding the cognitive feet of children and keeping their brains from developing unique, rich network connections. The road to hell is paved with good intentions. Take a look at how this claim is articulated online, on a site dedicated to teachers seeking help using the silver bullet:
“The brain before we learn how to read
“When we are born, the parts of the brain that process letters (the visual cortex) are disconnected from the parts of the brain that process sounds (the auditory cortex). At this stage, the brain is capable of understanding words it hears, but not words that it sees. In order to read words, a child’s brain needs to develop new neural networks that connect the visual cortex with the auditory cortex. It needs to connect the letter it ‘sees’ to the sound it ‘hears.’
“For example, when the visual cortex sees the letter f and the auditory cortex hears the sound ‘f’ as in ‘funny’, a new connection between the visual and auditory cortex is formed. Every time a child decodes a new word, a new connection between the visual cortex and the auditory cortex is formed.”
It is irresponsible, in my opinion, to publish misinformation as egregious as this. In fact, scientists have been interested in what is now called the “visual word form area” (VWFA) for over a century. For many years very little was known about this area, and people speculated that connections between this visual region and the auditory cortex likely developed as a person learns to read. Wait a minute, said one research team in Ohio in 2020:
“Some feel that the VWFA develops its specialization as a person learns to read. They theorize that it may begin as a region not too different from its neighbor the visual cortex, which recognizes faces. Li is the lead author of a new study that disagrees. ‘We found that isn’t true,’ says study senior author OSU psychologist Zeynep Saygin. ‘Even at birth, the VWFA is more connected functionally to the language network of the brain than it is to other areas. It is an incredibly exciting finding.’”
"When we are born, the parts of the brain that process letters (the visual cortex) are disconnected from the parts of the brain that process sounds,” wrote Mrs. Wordsmith in the earlier excerpt. Looking at this claim through the eyes of a critical discourse analyst, I am fascinated by the I-It dichotomy, the objectification of a human being’s brain. This is a dualism for sure, but not Descarte’s mind-body. ‘We’ are born, but inside us is this ‘thing’ that needs to be connected somehow, a USB cable maybe, before ‘it’ can be taught through phonics to connect words ‘it’ “sees” with words ‘it’ “hears.” The relation between the baby and ‘it’ is technical, mechanical. I wonder what other parts of the brain come needing to be constructed or assembled? I wonder how a teacher with this understanding thinks about teaching…
*****
Regardless of one’s position on phonics instruction and phonics laws, surely everyone agrees that beginning readers—like readers at any stage of development—use a) letters to represent sounds, b) word meanings stored in long-term memory, and c) sentence structure as a generative tool, as resources to make sense of texts. Those of us in the field of reading recognize these as the “three cueing systems.” Because reading is a language process, readers have to engage these resources, albeit somewhat probabilistically, to build a mental model of a text.
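To make “somewhat probabilistically” concrete, here is a minimal toy sketch in Python (my own illustration, not a model drawn from the reading literature) of how evidence from the three cueing systems might be weighed together to identify a partially decoded word. The candidate words, scores, and weights are invented purely for the example.

```python
# Toy illustration (not a model from the reading research): combining three
# cue sources -- graphophonic, semantic, and syntactic -- as weighted evidence
# for what a partially decoded word might be. All candidates, scores, and
# weights below are made up for the example.

def combine_cues(cue_scores, weights):
    """Multiply weighted evidence from each cueing system and normalize."""
    candidates = set()
    for scores in cue_scores.values():
        candidates.update(scores)
    combined = {}
    for word in candidates:
        p = 1.0
        for cue, scores in cue_scores.items():
            # A small floor keeps one silent cue from vetoing a candidate outright.
            p *= max(scores.get(word, 0.0), 0.01) ** weights[cue]
        combined[word] = p
    total = sum(combined.values())
    return {w: p / total for w, p in combined.items()}

# Sentence frame: "The cat sat on the ___"; the print shows "m_t" with a smudged vowel.
cue_scores = {
    "graphophonic": {"mat": 0.6, "met": 0.4},               # what the letters suggest
    "semantic":     {"mat": 0.5, "rug": 0.4, "met": 0.1},   # what makes sense
    "syntactic":    {"mat": 0.45, "rug": 0.45, "met": 0.1}, # what fits the slot (a noun)
}
weights = {"graphophonic": 1.0, "semantic": 1.0, "syntactic": 1.0}

print(combine_cues(cue_scores, weights))
```

Run on these made-up numbers, “mat” comes out far more probable than “met” or “rug”: no single cue settles the word, but the three together do.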
Yet many U.S. states, the province of Ontario in Canada, and, beginning in 2025, all of New Zealand have passed laws restraining teachers from teaching reading to children in any way other than SoR-style phonics. It’s going to be challenging to turn back the tide of misinformation about, and oversimplification of, both basic cognitive neuroscience and research in literacy pedagogy, but neuroscience itself may be the place to turn for support of a more dynamic, responsive, improvisational, student-centered beginning reading and writing pedagogy in the future.
Launched in 2009 by the National Institutes of Health (NIH), the Human Connectome Project (HCP) is a landmark initiative in neuroscience research. Its mission is to create an empirically derived, comprehensive map of neural functions and their interconnections in the human brain during particular recurring activities, providing a model for future research on brain function and dysfunction. Note carefully: HCP assumes the brain functions in an integrated manner. Single networks of connections patched together in repetitive loops may describe AI, but not the human brain.
The main phase of data collection ran from 2012 to 2015; data analysis and follow-up studies continued well beyond 2015. Led by Washington University, the University of Minnesota, and Oxford University, with many other institutions participating, the project scanned 1,200 healthy young adults, including many twins and siblings to afford genetic analyses.
The goal was to map the brain not by single locations (the visual cortex does letters, the auditory cortex does sounds), as though the brain were a collection of discrete functions called into action one by one. Instead, the whole brain is available at any given moment, and the whole brain is integrated to produce consciousness, with an executive function running the show.
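For readers who, like me, want a concrete picture of what a “map of interconnections” looks like in practice, here is a small, heavily simplified Python sketch. Functional connectivity in connectome-style analyses is commonly estimated by correlating regional activity time series; the region labels and signals below are synthetic placeholders, not HCP data.

```python
# Simplified sketch of how a functional "connectome" is commonly summarized:
# correlate each brain region's activity time series with every other region's,
# giving a region-by-region connectivity matrix (a network, not a list of
# isolated modules). Regions and signals here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
regions = ["visual", "auditory", "frontal", "parietal"]  # illustrative labels only
n_timepoints = 200

# Fake "BOLD-like" signals: a shared slow component plus region-specific noise,
# standing in for the resting-state time series a scanner would record.
shared = rng.standard_normal(n_timepoints)
signals = np.array([0.6 * shared + rng.standard_normal(n_timepoints) for _ in regions])

# Pearson correlation between every pair of regional time series.
connectivity = np.corrcoef(signals)

for i, a in enumerate(regions):
    for j, b in enumerate(regions):
        if j > i:
            print(f"{a:>8} -- {b:<8} r = {connectivity[i, j]:+.2f}")
```

The point of the sketch is the shape of the result: a region-by-region matrix of graded correlations, that is, a whole-brain network, rather than a set of disconnected boxes that “do letters” or “do sounds.”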
The following excerpt from a 2023 study from the HCP gives you a sense of the holistic perspective on the brain at the forefront of theory and research, in contrast to the single-function perspective emphasized by the phonics-first-and-furious advocates. In this case, the investigators will use imaging to study “cognitive control” in 300 young to early-middle-aged adults (“the DMC will collect longitudinal MR imaging data for structural, resting-state and task. Some subjects will also undergo retest imaging sessions”):
“The central hypothesis of the DMC framework is that cognitive control operates via two distinct operating modes – proactive control and reactive control. The proactive control mode can be conceptualized as a form of ‘early selection,’ in which goal-relevant information is actively maintained in a sustained manner, prior to the occurrence of cognitively demanding events, in order to optimally bias attention, perception and action systems in a goal-driven manner…”
I wasn’t able to locate any findings from this study of proactive and reactive control vis-à-vis brain networks. The site reported that findings would be posted sometime in 2023; I’ll continue to search for them.
The 3e Cognition perspective is an important recent framework in cognitive neuroscience that challenges traditional views of cognition (I’ve seen a 4e version mentioned). 3e Cognition stands for Embodied, Embedded, and Enactive cognition. This perspective argues that cognitive processes are not confined to the brain but are deeply intertwined with the body and the environment. Embodied means that cognition is shaped by the entire body, not just the brain. Our physical experiences and bodily states influence how we think and perceive. Embedded means that cognition is situated within and inseparable from the environment.
The physical and social contexts we're in play a crucial role in shaping our cognitive processes. Enactive cognition happens during dynamic interactions between an organism and its environment. The active role of the organism in creating and enacting its experiences suggests that thinking and learning are anything but passive and reactive. Insisting that research in the 3e Cognition paradigm must be done in real-life settings, researchers are looking for portable tools to record neurological data from the body and the brain of study subjects.
The structural and functional organization of the brain networks subserving basic daily activities (e.g., language, visuo-spatial cognition, movement, semantics) is not yet fully understood, but research is accelerating. As a novice observer of this emerging, highly technical work, which is of critical significance not just for educators but for all professionals directly serving human populations, my sense is that developing a working knowledge of what imaging studies can and can’t tell us is a prerequisite for having an informed opinion about policies regulating educational institutions.
Terry — Thank you so much for presenting this information in your always clear, detailed, and intellectually honest way. You have exposed the cognitive dangers of the slipshod, simplistic, and highly outdated approach promoted by a couple of journalists and some well-intentioned but naive conservatives, who have reasoned that turning back the clock would somehow pave the way for every child to learn to bark at text as a goal, instead of gaining satisfaction from constructing meaning from literature connected to their interests. Even more dangerous are the acts of enshrining the methods of the past into legislation that, in some cases, virtually outlaws something as basic as a multi-cueing approach to making reading meaningful and interesting. Thank you, again, for so clearly providing information that could positively influence the pedagogy of literacy.