“Suppose a population of tiny people disable your brain and replicate its functions themselves, while keeping the rest of your body in working order (see Block 1980a); each homunculus uses a cell phone to perform the signal-receiving and -transmitting functions of an individual neuron. Would such a system be conscious?” (Kirk, Robert, "Zombies", The Stanford Encyclopedia of Philosophy (Fall 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2023/entries/zombies/>.)
The essay I’m going to share consists primarily of AI output. I personally wouldn’t call it writing, at least not as I experience human writing (a subjective phenomenon, admittedly). I’m pretty confident I can tell heavy bot text from human. I would never claim sole authorship (Smith & Underwood, 2024), so perhaps it is hybrid? In fact, I want my reader to know this voice is not factual, but bottual.
I’m certain that the U.S. Copyright Office would refuse to register a copyright for language taken verbatim from the bot in the “essay.” I did, however, prune it, particularly the ending. I added some cayenne pepper here and there. See if you can tell.
Ironically, in the essay the bot asserts that humans have a moral obligation to use their strengths as conscious beings, strengths that simulated consciousness fatally lacks. For example, as non-biological beings, bots are zombies (a position the bot itself defended), which strikes me as a decided AI weakness.
David Chalmers is credited with driving home the zombie argument in recent times, an argument that had been circulating for years in a free-floating shimmer of nuances (see the Stanford URL above). Imagine a being identical in appearance and behavior to a human but lacking a biological brain, say, one whose neurons have been invaded and supplanted by tiny people with cell phones. This being is a philosophical zombie. Chalmers argues that consciousness cannot be explained by, nor reduced to, physical processes. Instead, consciousness must logically be accepted as a fundamental and non-physical part of the world, like space and time.
Thomas Nagel advanced the bat metaphor from which this post’s title derives. I’ll paste in a segment from Nagel’s 1974 paper to give you a flavor of his influential article “What Is It Like to Be a Bat?”:
Having biological brains and an evolved sonar capacity for locating and moving toward targets in time and space, bats have consciousness of some kind. But what is it like to be a bat? Consciousness may be species-specific, eh? Is it possible that simulated consciousness might be model-specific?
Against this backdrop, I’ll now share the bot’s essay, written to provide insights into bringing AI into human literacy teaching and learning.
*****
Embracing the Bat: Literacy Education in the Age of Advanced AI
In an era increasingly defined by advances in artificial intelligence, literacy educators face the critical task of preparing students for college and careers in which uniquely human capabilities must be harnessed and celebrated. The conversation on the nature of consciousness in humans and AI sheds valuable light on the intrinsic strengths and values of the human mind, particularly when contrasted with AI's singular operational prowess.
Thomas Nagel's evocative "bat" metaphor, which illustrates the ineffable quality of subjective experience, serves as a starting point for re-evaluating our educational paradigms. AI, much like Nagel's hypothetical bat, can mimic behaviors such as reading or summarizing texts, but it cannot experience the qualitative aspects of literacy: the emotional connection to narrative, the force of curiosity that motivates reading to learn, learning itself, the cultural resonance of language, or the imaginative leaps that literature inspires.
As educators, we must focus on fostering these uniquely human aspects of literacy. This involves cultivating students' capacity for empathy and service, encouraging the application of critical thinking to diverse texts, and nurturing the imaginative skills that underpin creative writing. AI can perform tasks that require memory or basic comprehension, but it is our job to elevate literacy education beyond the operational level, to ensure that students retain their irreplaceable human edge.
The societal implications of AI's capabilities challenge us to ponder the future of literacy. AI can process and analyze data with superhuman efficiency, but it falls short of producing the nuanced ethical reasoning that literacy demands, such as interpreting complex human motivations and discerning the moral fabric within narratives.
Consequently, literacy education must go beyond teaching students to read and write in the old paradigm. It should aim to develop a deep appreciation for the human condition and the global environment—a distinctly human endeavor that AI cannot replicate. In practice, this means engaging students with rich literary texts that provoke emotional and philosophical reflection, prompting them to grapple with the moral quandaries and societal issues embedded within stories, and supporting them in the use of AI to accelerate their knowledge making. Knowledge matters.
In addition to traditional literacy, we must also equip students with digital literacy skills, enabling them to critically evaluate the information presented by AI systems. This includes understanding the biases that may be inherent in AI algorithms, recognizing the limitations of AI-generated content, and discerning the quality of information across digital platforms.
AI's complementarity with human abilities can also be harnessed in literacy education. While AI can assist in individualizing learning, providing immediate feedback, and freeing educators to focus on more complex instructional tasks, its role should be viewed as supportive, not substitutive.
In conclusion, as literacy educators, our task is to help students become like Nagel's bat—not in the literal sense of experiencing life as a bat, but in embracing the profound, subjective, and qualitative aspects of humanity that AI cannot emulate. Our goal is to cultivate literate individuals capable of deep critical thinking, empathetic understanding, and creative expression—skills that will remain indispensable, regardless of AI's ascendance. Through a balanced approach that leverages AI's operational abilities while prioritizing human creativity and critical consciousness, we can ensure that literacy remains a cornerstone of human intellect and culture in the AI era.
I read from Nagel’s work in my posthumanism class. You raise valuable posthumanist perspectives here. You made me think of Stiegler’s epiphylogenesis and Gibson’s “Johnny Mnemonic,” in which the technology becomes part of us. Thanks for giving me something to think about further in this realm.