The hyphen in "Co-Operative" isn't typographical. It's theoretical.
It visually embeds the collaborative process into the word itself, marking the essential gap that must be actively bridged through joint action. Where "cooperation" suggests a completed state or harmonious outcome, "co-operative" foregrounds the ongoing work of coordination between separate agents.
The hyphen literally represents what Goodwin calls the "interactive space" where meaning emerges—not in individual minds, but in the dynamic relationship between participants. This typographical choice signals Goodwin's core insight: intelligence itself is not located in individuals but in the hyphenated space between them.
When one considers the word ‘cooperation’ as a verbal substantive, my bet is that the emotional resonance of the word overwhelms its denotation. In other words, we foreground the warm fuzzies, the ‘I’ll do my part for the good of the group,’ and background the complex social and cultural relations among the individual operators. Cooperation describes a fleeting quality of human interaction. Co-Operation describes social life twenty-four hours a day.
Introducing Charles Goodwin
Charles Goodwin spent his career as a co-operator at UCLA, a distinguished research professor of communication and a key member of UCLA's Center for Language, Interaction and Culture until his retirement in 2017. His prolific publication of seminal studies, like his 1994 article on professional vision, which spawned a re-visioning of professional epistemic practices across a spectrum of disciplines, is remarkably both centripetal and centrifugal in Bakhtin’s sense.
His 2018 book Co-Operative Action published by Cambridge University Press represents the culmination of his lifetime research. I learned about the book serendipitously a year ago and spent six months with it, taking well-spent chunks of time to probe beneath the surface of its rich insights into language as a tool for social transformation in the moment.
With never a mention of AI, Goodwin pushes the envelope of questions about synthetic language machines and their operations, questions that reach to the core of machined texts in relation to human learning.
“Reuse with transformation of structures made publicly available by earlier actors,” he wrote in chapter two (p. 23), accounting for the “progressive accumulation” of new structures, including knowledge, that “sits at the heart of human action.” It should not surprise that Goodwin was deeply interested in the work of Bruno Latour, who also studied the practices of scientists through an anthropological lens.
The stars aligned in 2014 for Co-Operative Action to emerge as a book in 2018, one I could order from Amazon in 2024. “With a new theoretical framework and frequently with important revisions,” Goodwin wrote in the front matter (p. xxviii), “this volume includes work done at different points in my career (it was originally solicited as a collection of my papers about my work in scientific and technical environments), as well as a new analysis. I happily thank the original publishers for permissions to reuse it here.” Goodwin died in Los Angeles from cancer in May 2018.
What emerges from Goodwin's meticulous attention to co-operative action is a new, robust understanding of how verbal intelligence itself operates in socially situated action—not as individual cognition but as the ongoing social transformation of publicly available structures, an insight that anticipated our current struggles to understand the relationship between human learning and artificial intelligence.
Chapter 22: The Blackness of Black
I chose this chapter to illuminate what I mean by “pushing the envelope of questions” about AI. Color has long captivated scientists and philosophers of perception because it sits at the intersection of physics, neuroscience, psychology, and culture, revealing questions about the nature of reality, consciousness, and knowledge itself.
From a scientific perspective, color presents once again what philosophers call the "hard problem" of consciousness: how does the objective wavelength of electromagnetic radiation for red light become the subjective, qualitative experience of "redness"? This gap between physical properties and phenomenological experience has puzzled thinkers from Newton, who famously noted that "the rays are not coloured," to contemporary neuroscientists mapping the complex neural pathways of color processing.
The philosophical stakes are equally profound. Color challenges our understanding of where the world ends and perception begins. Is color "out there" in objects, or is it constructed by our visual system? The answer—both and neither—reveals the co-constitutive relationship between perceiver and world that underlies all knowledge.
Color constancy phenomena, where we perceive a white shirt as white under both fluorescent and incandescent lighting despite dramatically different wavelength compositions, demonstrate that color perception is an active, constructive process rather than passive noting and categorizing.
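The constructive process behind color constancy can be sketched computationally. The toy example below uses von Kries-style adaptation, the classic idea of dividing each sensor channel by an estimate of the illuminant; all the numbers are made up for illustration, not measured values:

```python
# Toy illustration of color constancy via von Kries-style adaptation:
# the same white shirt sends very different RGB signals to the eye under
# two illuminants, yet dividing out an estimate of the illuminant
# recovers a stable "white". Numbers are invented for illustration.

def adapt(signal, illuminant_estimate):
    """Divide each channel by the estimated illuminant (von Kries scaling)."""
    return tuple(round(s / i, 2) for s, i in zip(signal, illuminant_estimate))

# A white surface (reflectance ~1.0 in every channel) under two lights:
daylight = (1.0, 1.0, 1.0)        # roughly flat spectrum
incandescent = (1.0, 0.8, 0.5)    # warm, red-heavy spectrum

# Sensor signal = illuminant * reflectance, so the raw signals differ:
shirt_under_daylight = daylight
shirt_under_incandescent = incandescent

# ...but after adapting to each illuminant, both read as the same white:
print(adapt(shirt_under_daylight, daylight))          # (1.0, 1.0, 1.0)
print(adapt(shirt_under_incandescent, incandescent))  # (1.0, 1.0, 1.0)
```

The point of the sketch is only that the percept is computed, not read off the raw signal, which is what "active, constructive process" means here.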
Culturally, color systems vary across societies, suggesting that even basic perceptual categories are shaped by social and linguistic practices. Studies on the Himba people of Namibia show that their language does not distinguish blue from green as Western languages do; instead, they have a collection of categories that group these colors together, affecting their perception and discrimination of these hues.
Russian speakers, whose language makes an obligatory distinction between lighter blue (“goluboy”) and darker blue (“siniy”), are measurably faster at discriminating between these shades compared to English speakers, illustrating the influence of language on perceptual acuity.
Geochemists Co-Operate to Identify Jet Black
Goodwin's Chapter 22: The Blackness of Black demonstrates how color terms function not as simple labels for perceptual categories, but as tools for collaborative social action. In Goodwin's framework, when someone says "the black one" during a joint activity, they're actively coordinating attention, building shared understanding, and transforming the immediate social situation through the strategic deployment of publicly available linguistic resources.
The focus of this chapter’s discussion is a senior geochemist teaching a team of geochemists-in-training how to make a particular acrylic fiber used in testing bodies of water to determine, for example, how much of the water in the seas off Puerto Rico came from the Amazon basin. The scientists must make these fibers themselves by soaking them in a vat of chemicals.
When the materials turn jet black, the scientists must pull the plug on the reaction. The problem? If the fiber is taken out and put in the quenching bath before it has reached the precise state of jet black, it produces error as a measurement device. Making that decision can only be done by an expert with a solid track record, and newcomers must be trained to reuse the expert’s mental representation through guided sensory experience with a master. There is no other way to monitor the circulation of water around the globe.
Goodwin insists that theorists must not confuse “where relevant cognitive phenomena are to be found (in the brain) and structured systems of mental representations.” In other words, when we focus on consciousness, we must keep separate the experience of sensing and perceiving the color “black” from the action of naming that color.
Goodwin also insisted on the converse, i.e., that theorists must not confuse language representations with cognitive phenomena. In other words, language is a system for communicating mental representations, but it is not itself the cognitive phenomenon of a mental representation in mind. In a scientist’s laboratory, colors can be sorted into discrete categories by instruments that detect frequencies of light waves. In the life of a geochemist—and of archeologists as well—colors in the moment are too subtle and contextualized to allow reliance on physical frequencies.
Work in a variety of different fields, which Goodwin discusses, overwhelmingly suggests that these instrumental forms of discrimination and verification simply do not reflect how people use color terms in real life. Cognitive phenomena are not located in the same category as the language we use in the moment to “reuse and transform” old structures into new. The act of reusing and transforming, moreover, is not linguistic, but cognitive.
This discussion has profound implications for AI. Current machine learning systems can classify colors with superhuman accuracy, but they cannot participate in the kind of situated, collaborative color-naming that Goodwin analyzes. They lack the ability to read the in vivo context, to anticipate operational needs, and to dynamically transform color terms into tools for joint action. The "blackness of black" isn't just about spectral properties. It's about how humans use color language as a resource for building shared worlds together, moment by moment, in real time.
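To make the contrast concrete, here is a minimal sketch of what machine "color classification" amounts to, using an invented palette and nearest-neighbor matching in RGB space (the palette values and function are hypothetical, not any particular system's):

```python
import math

# Hypothetical reference palette; real systems use far richer models,
# but the logic is the same: map an input to a fixed label.
PALETTE = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red":   (255, 0, 0),
    "green": (0, 128, 0),
    "blue":  (0, 0, 255),
}

def classify_color(rgb):
    """Return the palette name nearest to rgb in Euclidean RGB distance."""
    return min(PALETTE, key=lambda name: math.dist(PALETTE[name], rgb))

print(classify_color((20, 15, 25)))  # -> black
```

The function is perfectly accurate on its own terms, yet nothing in it can read a laboratory situation or decide whether a fiber is "jet black enough" for this run; that judgment lives in the co-operative activity, not in the mapping.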
Five Key Theoretical Takeaways
The Separation Principle: Brain Location ≠ Cognitive System Structure
Goodwin insists we must not confuse "where relevant cognitive phenomena are to be found (in the brain) and structured systems of mental representations," which are social. The neural substrate of experiencing "blackness" is categorically different from the systematic ways we organize and deploy color concepts. For AI, this means that mimicking neural architectures doesn't automatically produce human-like cognitive systems. The organizational principles operate at a different level entirely. If this categorical difference holds, as Goodwin insists, AI can never “become human.”
The Transformation Imperative: Language as Active Constructor, Not Passive Label
Language doesn't simply label pre-existing mental categories; it actively constructs and transforms them in real-time social action. When someone says "the black one," they're not retrieving a stored color category but dynamically creating a tool for joint coordination. Current AI systems excel at classification but fail at this constructive, immediate, transformative dimension of language use.
The Contextual Intelligence Gap: Accuracy vs. Situated Action
Machine learning systems can classify colors with superhuman accuracy in laboratory conditions, but this misses the fundamental point. Human color intelligence lies not in precision but in the ability to deploy color terms strategically within socially situated activities. The gap isn't technical—it's conceptual, revealing AI's blindness to the social constitution of meaning.
The Reuse-and-Transform Architecture: Intelligence as Cultural Inheritance
Goodwin's core insight—new actions result from "reuse with transformation of structures made publicly available by earlier actors"—reveals intelligence as fundamentally cultural and cumulative rather than individual and computational. David Perkins, Edwin Hutchins, Jean Lave, and others have fleshed out theoretical models of communities of practice and distributed cognition that substantiate Goodwin’s perspective. AI systems process patterns but cannot participate in the ongoing social transformation of inherited structures that constitutes human learning and knowledge creation.
The Co-Operative Paradox: Individual Cognition vs. Distributed Intelligence
The hyphenated "Co-Operative" points to a profound challenge for AI. Human intelligence operates not through individual cognitive processing but through the moment-by-moment collaborative transformation of shared cultural resources. Accordingly, the path to artificial general intelligence may require rethinking the unit of analysis from individual machines and humans to distributed, culturally-embedded systems of human-machine interaction.
These takeaways collectively point to why AI systems, despite impressive capabilities, remain limited in their ability to participate in the kind of situated, transformative, and socially embedded intelligence that Goodwin's work illuminates.
Key Takeaways for Teachers
Reframe Intelligence as Social Transformation, Not Individual Processing
Rather than seeing learning as students acquiring discrete skills or knowledge, understand it as their participation in the ongoing "reuse with transformation" of cultural structures. Unfortunately, the transmission model of teaching and learning, the heavily controlling and evaluative model that constitutes our tradition, teaches human students to excel at pattern replication but not at transforming cultural structures. We teach students as if they are AIs. This co-operative misunderstanding of human learning is a serious obstacle to resolving many AI-related issues.
Teach the Art of Co-Operative Meaning-Making
Goodwin's "blackness of black" example shows that meaning emerges through situated social action, not through dictionary definitions, tables, or charts. In your classroom, focus on how students use language as a tool for joint coordination—in free-flowing discussions, multimodal presentations, group projects, public performances, panel discussions, and collective inquiry and exploration. These are precisely the skills AI cannot replicate because they require reading social context, anticipating operational needs, and dynamically transforming language into tools for shared understanding.
Emphasize Participation Over Performance in the AI Age
Many are talking about focusing on process, not product, but that shift by itself is not a shift to learning co-operatively. It replicates the same teaching methodology while emphasizing a different aspect for evaluation. Whether we look at the process or the product, if we continue to control students’ cognitive behaviors, we simply reframe the product from a physical text into a series of regimented steps.
Co-operative engagement and participation remove the incentive to depend on AI. Document how students negotiate meaning with peers, revise their thinking through dialogue, and transform their understanding through interaction. These processes reveal the co-operative dimension of intelligence that remains uniquely human.
Develop Cultural Inheritance Literacy
Help students understand how they're always building on "structures made publicly available by earlier actors." This isn't just citation. It's understanding how knowledge is cumulative, social, and revisable in the here and now as a basis for action. Let their ideas emerge from ongoing social and cultural conversations. When students understand this, they can better see both what AI can do (recombine existing patterns) and what it cannot (participate in the living transformation of cultural knowledge).
Design Learning as Distributed Cognition
Move away from individual-focused assignments and assessments toward co-operative, community-based knowledge-building activities mirroring real-world situations. Create classroom environments where intelligence emerges through interaction—peer review, co-operative research projects, discussion formats that build on others' ideas. This distributed approach reveals human learning as fundamentally different from AI processing.
Teach the Contextual Intelligence Gap
Use Goodwin's insight about accuracy versus situated action to help students understand AI's limitations. A calculator can compute faster than humans, but it can't read a room to know when precision matters versus when approximation serves social goals. Similarly, AI can generate text, but it can't navigate the contextual nuances of classroom discourse, peer relationships, or disciplinary conversations.
The Deeper Educational Implication
If Goodwin is right that human intelligence operates through "the ongoing social transformation of publicly available structures," then education's core mission isn't threatened by AI—it's clarified. Our job isn't to transmit publicly available structures. Current practice can’t compete with AI's pattern-matching abilities.
When we cultivate students' capacity for the kind of situated, transformative thinking that remains uniquely human—the kind that accumulates when we “labor with” one another instead of competing—we make the social dimensions of learning—discussion, collaboration, peer interaction—not just pedagogically useful but essential.
The question isn't whether AI will replace human learning, but when we'll recognize what human learning is: the transformation of inherited cultural resources through moment-by-moment social interaction as co-operators in action.
Thought provoking post.
One thing is not clear to me: Does that geochemistry example reveal a shortcoming of human language in describing our own perception of color, or does it point to the limitations of computing systems in sharing our color perception?
Nice essay, Terry! I'm adding Goodwin to my list of what I'm calling "process philosophy," which, for reasons you explore here, is helpful in thinking about the social aspects of cognition revealed by these new information machines we call AI. To what extent did Goodwin look back to writers before Goffman, like William James and Charles Horton Cooley?