Thought-provoking post.
One thing is not clear to me: Does that geochemistry example reveal a shortcoming of human language in describing our own perception of color, or does it point to the limitations of computing systems in sharing our color perception?
Great question, Nirav. As I interpret the chapter, the geochemistry example reveals neither a shortcoming of human language nor a limitation of computing systems, but rather illuminates the fundamental difference between how humans and machines operate with perceptual categories.
For the geochemists, language isn’t failing them—it’s doing what Goodwin argues language does. When the expert says “that’s jet black” or “not quite yet,” they’re not struggling to describe their perception. They’re using language as a tool for coordinated action in a specific context where the stakes are high (accurate water circulation data) and the judgment requires embodied expertise developed through guided practice with a master.
In other words, when the chemist says “That’s jet black,” that’s how they have learned to identify it, and it has worked effectively. It may or may not be exactly the same frequency as the jet black another expert has learned, but whatever frequency that second opinion relies on is OK, because that expert’s perception works just as well.
Think of it in relation to the white shirt example. We call the same shirt white regardless of lighting conditions because we have learned to identify a white shirt in a variety of conditions.
The “limitation” of computing systems isn’t that they can’t perceive color accurately—they can measure spectral properties with superhuman precision. The limitation is that they can’t participate in the social constitution of meaning that makes “jet black” operationally significant in that moment, in that laboratory, for that purpose.
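To make that contrast concrete, here is a toy sketch (the wavelengths, readings, and threshold are all invented, not from the chapter). A program can apply a spectral criterion for “jet black” with perfect consistency, yet the criterion itself carries none of the operational significance Goodwin describes:

```python
# Toy sketch: a machine can classify "jet black" from measured reflectance,
# but the threshold is an arbitrary stand-in for the situated judgment the
# geochemists actually make. All numbers below are invented for illustration.

def mean_reflectance(spectrum: dict[int, float]) -> float:
    """Average reflectance across sampled wavelengths (0.0 = no light reflected)."""
    return sum(spectrum.values()) / len(spectrum)

def looks_jet_black(spectrum: dict[int, float], threshold: float = 0.02) -> bool:
    """A purely spectral criterion: 'jet black' if almost no light is reflected."""
    return mean_reflectance(spectrum) < threshold

# A hypothetical fiber sampled at a few visible wavelengths (nm -> reflectance).
fiber = {450: 0.015, 550: 0.018, 650: 0.022}
print(looks_jet_black(fiber))  # True for this invented sample

# What the function cannot supply is whether *this* reading, at *this* moment,
# for *this* water-circulation analysis, counts as "jet black" for the purpose
# of stopping the reaction. That part lives in the practice, not in the threshold.
```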
Goodwin’s point is more radical: human color perception itself is always already social and situated. There’s no pure perceptual experience of “blackness” that language then struggles to capture. Rather, the ability to see “jet black” at the precise moment when the reaction should be stopped is developed through social interaction—through working alongside experts who can guide attention to the relevant features.
So the example doesn’t reveal a gap between perception and language, but rather shows how perception, language, and action are co-constituted through social practice. Computing systems can’t bridge this gap because they’re not participating in the ongoing social transformation of inherited structures—they’re processing patterns outside the cultural practices that give those patterns their operational meaning.
The geochemists aren’t limited by language; they’re enabled by their participation in a community of practice that has developed ways of coordinating attention and action around subtle perceptual distinctions that matter for their work.
Well explained, thank you!
Taking a cue from such examples, it will be interesting to see how humans aspire to sustain their participation and evolve cognitively in an increasingly digitally driven ecosystem.
Omg, Nirav, this is the big rhetorical question. What are humans going to do now?
I both hope and believe we will do better, Terry...
It's so fascinating and enlightening that you constantly bring such gems of "thinking about thinking" and remind us about the shoulders we are standing upon💙
Oh, I do appreciate you, Nirav. You are letting me know I should keep on writing and sharing my thoughts. I have days where I don’t necessarily believe that 🙏
Nice essay, Terry! I'm adding Goodwin to my list of what I'm calling "process philosophy," which, for reasons you explore here, is helpful in thinking about the social aspects of cognition revealed by these new information machines we call AI. To what extent did Goodwin look back to writers before Goffman, like William James and Charles Horton Cooley?
Goodwin was a student of Erving Goffman at the University of Pennsylvania and was deeply influenced by him, particularly Goffman’s theatrical metaphor. On p. 51 he cites James’ “abstract phenomenology of time” and proceeds to use it in his analysis of a mundane conversation about whether an eyelet on a dress was white and embroidered. James wrote “the practically cognized present is no knife-edge, but a saddle-back…from which we see in two directions into time”; Goodwin writes that the structure James offered ”has been given more definite shape through the emerging organization of language the participants are using to build their action together.” He feels that James’ perspective on the present moment is individualistic but does not preclude two people riding in the same saddle, sharing glances into the past and the future. He doesn’t cite Cooley.

On p. 15 in his “brief overview” he discusses Goffman’s idea of a “state of mutual monitoring” during face-to-face conversation in the moment as a crucial boundary around conversation analysis and then extends it to include “sedimented history” as an element of interactions involving predecessors “…mov[ing] beyond action within a state of copresence.” He develops this idea in a really cool study (chapter 16) of an airline worker monitoring incoming flights at an airport to make sure two planes never head for the same landing slot. This chapter synthesizes James’ bidirectional gaze in the saddle (or on a ship, as James extends the analogy) and Goffman’s “mutual monitoring” to include past actors in historical sediment (he uses the term substrate a lot) and shows how signs and words written by other airport actors are crucial to on-the-spot decision making among the rank and file of airport workers—just the kind of insight Donald Trump lacks :)
Wonderful. Thanks for the detailed references.
Slang transitions over time: “Man, that’s hot” to “Wow, cool, dude.” Then copacetic, awesome, etc.
Language is alive, fluid.
It is very much alive and very much fluid. Now what happens with AI?
Re: "They lack the ability to read the in vivo context, to anticipate operational needs, and to dynamically transform color terms into tools for joint action." If the context is detectable and the needs are forecastable, AI could probably do this, providing input that enables humans' joint action and if it does, who's to say that it wasn't a contributor to that? Also, it's because these things are hard to foresee that we'll need empirical research!
Absolutely. Omg do we need theoretically driven empirical research. The jury’s out for me re whether AI could detect the precise moment to stop the chemical reaction. My sense is that there is a moment when the fiber is going to be as good as it gets. If it’s removed too soon, it doesn’t attract the substances needed to identify the sources of the water mixture in the river. If too long, it is prone to error. I infer that error is what the jet black decision is trying to minimize. It’s tantalizing to think AI could do this. It may have to do with the human capacity to see into the immediate past and the immediate future, which I’m not sure AI can do as well. I still don’t trust autonomous vehicles on the road for that reason, but I’m no expert in it.
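For what it’s worth, here is the kind of naive rule an engineer might try first (the darkness trajectory, window, and tolerance are all made up; nothing here comes from the actual lab). It stops the reaction when the reading stops changing much, a crude stand-in for “as good as it gets”:

```python
# Illustrative only: a naive stopping rule over a hypothetical darkness reading
# sampled during the reaction. "Too soon" = not enough history to judge;
# "too long" = we kept going after the signal plateaued.

def should_stop(readings: list[float], window: int = 3, tolerance: float = 0.01) -> bool:
    """Stop once the last `window` readings differ by less than `tolerance`."""
    if len(readings) < window:
        return False  # too soon: not enough history yet
    recent = readings[-window:]
    return max(recent) - min(recent) < tolerance

darkness = [0.40, 0.62, 0.78, 0.88, 0.93, 0.945, 0.95, 0.952]  # invented trajectory
for t in range(1, len(darkness) + 1):
    if should_stop(darkness[:t]):
        print(f"stop at sample {t}")  # where the expert would say "that's jet black"
        break
```

A plateau check like this at least looks into the immediate past; what it can’t do is anticipate, the way the expert does, what pulling the fiber out right now will mean for the analysis downstream.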