“In general, computer interfaces present reality as framed in the perspective of modern capitalism, thus, orienting technology along an existing axis of class privilege” (Selfe & Selfe, 1994).
Among the most profound debates spurred by AI is the question of how this new technology will affect income inequality. Studies show that AI-driven productivity gains are concentrated among high-income, highly skilled workers, particularly in knowledge-intensive fields like software engineering, law, and medicine. These workers benefit from tools that enhance their decision-making and efficiency, making their labor more valuable1.
Studies also point to AI-induced “technological unemployment.” AI is increasingly automating tasks traditionally performed by lower-skilled workers, such as customer service and clerical work, and many companies have already replaced hundreds of jobs with AI systems. The economic benefits of AI tend to accrue more to corporations and investors than to labor2.
If Americans are genuinely worried about exacerbating the income inequality that has driven voters toward opportunistic populist leaders promising to take care of working-class families, they must begin to examine with a clear head what natural language technology can and cannot do.
AI Is Not Human
American public schools must teach their children well, including realistic and intellectually honest ways to use AI tools to learn, to synthesize, and to produce. Positioning AI as a helpful friend or a tireless and affirming tutor to children at a young age is precisely what should not be happening.
AI experts caution against anthropomorphizing artificial intelligence3. Calling AI a friend suggests that the technology has a human sense of morality. AI systems are programmed to respond based on data and algorithms. When an AI refuses to generate a response to a morally laden prompt, the refusal is grounded in training data, not in moral analysis. AI can no more decide moral questions than can a vacuum cleaner or a thermostat.
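To make the mechanical nature of a refusal concrete, here is a deliberately crude Python sketch. It is a caricature, not a description of how production systems work: real models learn refusal behavior from training data rather than from a hard-coded keyword list, and the blocked terms and messages below are invented for illustration. The point is only that a refusal can be produced without any moral reasoning at all.

```python
# A caricature of an AI "refusal": a mechanical pattern check with no moral
# reasoning behind it. Real systems learn this behavior from training data
# rather than from a hard-coded list, but in neither case does a refusal
# reflect moral deliberation. The blocked terms are invented placeholders.
BLOCKED_TERMS = {"weapon", "malware"}  # hypothetical examples

def respond(prompt: str) -> str:
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "I can't help with that."      # a pattern match, not a judgment
    return "Here is a generated answer..."    # likewise: pattern completion

print(respond("Write malware for me"))        # refused, with no grasp of why
print(respond("Explain how vaccines work"))   # answered, with no grasp either
```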
Reading AI output as a human product with social and cultural intentions relies on an array of false assumptions. Human beings assume they can trust one another until they learn otherwise, but AI can never be trusted to present reliable information; its output must always be verified before it is acted upon.
Conflating AI output with human text can lead students to attribute human characteristics like consciousness, intentionality, and genuine understanding to what is fundamentally a statistical pattern-matching system4. A student may then read and comprehend AI output as if it were a human-written textbook, assimilating and accommodating new information without externally verifying its accuracy.
Such anthropomorphization obscures the mechanical nature of large language models and their inherent limitations, potentially leading to misuse of AI systems or overreliance on their outputs in contexts where human judgment and understanding remain essential.
The History of the Word Computer as Human
The origins of the English word ‘computer’ are murky. The Oxford English Dictionary dates the first known written use of the word to 1613 and attributes it to an English poet named Richard Brathwaite (1588-1673), who published a book called Yong Mans Gleanings5. In his book, Brathwaite used ‘computer’ to refer to a person, a man in this case, skilled in mental mathematics:
“I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number: The daies of Man are threescore and ten.”
In his meditation on human mortality and the fleet feet of time, Brathwaite was curious to know precisely how many years people have to live on this planet to set their accounts in order before meeting their maker. In 1613, this question required unusual mathematical skill to answer, hence the appeal to “the best Arithmetician that euer breathed.”
For lower-class individuals during this period, mathematics education was rare. Only about 30% of men and 10% of women were literate in England during the 17th century, and even fewer had access to formal arithmetic instruction6.
Formal education was primarily available to the upper classes, merchants, and tradespeople. Grammar schools taught basic literacy and sometimes arithmetic to children from wealthier or middle-class families7. For several centuries following Brathwaite’s writings, ordinary use of the word ‘computer’ meant “human expert in calculating.”
Demoting Computer to Machine
Charles Babbage, father of the modern computer, was interested in supplementing human computers with mechanical computing devices. During the early 19th century, “computer” continued to refer to a person who performed manual calculations for scientific or navigational purposes. Jobs for human computers were common in fields requiring precise arithmetic to create mathematical tables or to solve complex equations.
Babbage’s use of the term “computer” for a person aided by a machine planted the seed of an ambiguity that persisted for another century. His goal was to supplement error-prone human computers with a precision machine: a mechanical calculator, essentially. His work on the ‘Difference Engine,’ designed to tabulate polynomial functions, and later the ‘Analytical Engine,’ a general-purpose programmable machine, was done to automate these tasks8 in support of human work.
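For readers curious about what, concretely, a difference engine automates, here is a minimal Python sketch of the method of finite differences, offered as an illustration rather than a description of Babbage's actual hardware. The polynomial and table length are arbitrary examples; the essential point is that after a short setup, every further table entry is produced by addition alone, which is precisely the kind of work gears can do.

```python
# A toy version of the method of finite differences: after computing a few
# seed values directly, every further table entry comes from additions alone.
# The polynomial below is an arbitrary example.
def difference_engine(coeffs, start, count):
    """Tabulate the polynomial with the given coefficients (lowest power
    first) at start, start+1, ..., start+count-1 using repeated addition."""
    degree = len(coeffs) - 1

    def p(x):
        return sum(c * x**k for k, c in enumerate(coeffs))

    # Seed values p(start) ... p(start + degree), computed directly once.
    column = [p(start + i) for i in range(degree + 1)]

    # Leading entries of the difference columns: [value, Δ, Δ², ..., Δ^degree].
    diffs = []
    for _ in range(degree + 1):
        diffs.append(column[0])
        column = [b - a for a, b in zip(column, column[1:])]

    table = []
    for _ in range(count):
        table.append(diffs[0])
        for level in range(degree):           # additions only
            diffs[level] += diffs[level + 1]
    return table

# x² + 1 for x = 0..9 -> [1, 2, 5, 10, 17, 26, 37, 50, 65, 82]
print(difference_engine([1, 0, 1], start=0, count=10))
```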
Early in the 20th century, “computer” still referred by default to human beings, not machines, although among experts the default gradually shifted to “machine, not human.” Employed in fields like astronomy and navigation, and later by the military during the World Wars, human computers were often women working with manual calculating devices9.
In England, Alan Turing developed his famous machine, the electromechanical Bombe, to help decrypt messages sent by German military intelligence10. The Germans used the Enigma machine, a complex electromechanical cipher system, to hide top-secret information, and Turing’s work at Bletchley Park centered on building machinery to crack it. In Turing’s sense, the default meaning of computer was ‘machine, not human.’
Turing, like other innovators of the time, began to explore electronic circuits rather than continue to rely on manual mechanical devices. Electricity could flip switches on and off very rapidly, exactly the behavior needed for binary codes, the patterns of zeros and ones that brought us modern computers.
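As a small illustration of those patterns of zeros and ones, the sketch below prints the standard 8-bit encoding of each character in a word; the word itself is an arbitrary choice. Each bit corresponds to one switch, on or off, and fast electronic switching is what made storing and manipulating such patterns practical.

```python
# Each character is stored as a fixed pattern of on/off states; fast
# electronic switches make reading and rewriting such patterns practical.
word = "computer"                      # an arbitrary example
for ch in word:
    bits = format(ord(ch), "08b")      # the character's 8-bit binary code
    print(f"{ch} -> {bits}")
# c -> 01100011, o -> 01101111, and so on: each bit is one switch.
```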
The story of modern computing's evolution hinges on three revolutionary technologies: the vacuum tube, the transistor, and the integrated circuit. The vacuum tube emerged as the first electronic switching component in computers during the 1940s, replacing mechanical relays with devices that could control electron flow and switch on and off thousands of times per second11.
The landscape of computing changed dramatically with the invention of the transistor at Bell Labs in 1947 by John Bardeen, Walter Brattain, and William Shockley. Transistors, small enough to fit on a fingertip, offered a remarkable improvement over vacuum tubes. They generated less heat, lasted longer, consumed minimal power, and could be mass-produced efficiently12.
By the late 1950s, computers like the IBM 7090 had embraced transistor technology, ushering in the second generation of computers that boasted increased reliability and significantly reduced size and cost.
This technological evolution from vacuum tubes to transistors to integrated circuits marked the transition through the first three generations of computers. Each new generation brought exponential improvements in size reduction, reliability, power efficiency, cost, and performance. The journey continues today, with modern chips housing billions of transistors, enabling the powerful computing devices that have become integral to our daily lives13.
Stability of the English Word Computer
While other English words drift and evolve, morphing their meanings across decades, "computer" has locked itself into our lexicon with mechanical precision. The term's original reference to humans who performed calculations feels like a dusty museum piece, as distant as horse-drawn carriages or telegraph operators, yet the word's modern meaning has held firm.
For generations now, the word has conjured only one image: the binary-driven boxes and screens that have become the backbone of modern civilization. It's not likely that anyone alive today is going to mistake the word computer for anything other than a machine.
Yet ironically, just as "computer" settled into semantic certainty, its silicon descendants in artificial intelligence have thrown us back into a world of linguistic uncertainty. We once again find ourselves debating fundamental terms like "intelligence," "consciousness," and even "understanding" as if our machines, in growing more sophisticated, have somehow conspired to make our language less precise.
Bitten by the Apple
The advent of the Macintosh in 1984 marked a pivotal shift in how humans interfaced with computers. Moving away from the stark command-line interfaces that reinforced the machine's mechanical nature, Apple deliberately crafted a more approachable, human-centered computing experience. The company's iconic bitten apple logo itself carried rich symbolic weight, evoking both biblical knowledge and scientific discovery through its reference to Newton's apple, while its rainbow coloring suggested creativity and possibility.
This humanization extended deeply into the Mac's interface metaphors, deliberately drawing from familiar physical spaces and objects. The computer greeted users with a smile and a friendly "hello." Its "desktop" metaphor, complete with files, folders, and windows, translated abstract computational concepts into familiar office materials that users could intuitively understand and manipulate. This skeuomorphic design philosophy—making digital elements mirror their physical counterparts—helped bridge the gap between human and machine.
Microsoft's subsequent development of Windows built upon and popularized these humanizing interface metaphors. While initially criticized as derivative of Apple's innovations, Windows' widespread adoption on IBM PC-compatible computers meant these more accessible computing paradigms reached a far broader audience. The transition from DOS to Windows marked the industry's broader embrace of graphical user interfaces and human-centered design principles that continue to influence how we interact with computers today.
Yet as we've seen, this very humanization of the computer interface, while making technology more accessible, also embeds certain cultural and economic assumptions that warrant critical examination. The desktop metaphor, born in corporate research labs, naturally reflected and reinforced the organizational logic of white-collar office work. As Selfe and Selfe argue, these interfaces thus came preloaded with particular perspectives on productivity, organization, and labor that continue to shape how we think about and interact with computers today.
The Legacy of Computer Humanization
The deliberate humanization of computers in the 1980s, while successful in making technology more accessible, has left a troubling legacy. The friendly interfaces designed to make machines more approachable have fostered widespread misconceptions about AI.
Media magnification of this humanized image has further muddied public understanding. Science fiction narratives of conscious computers, from HAL 9000 to Her, have shaped cultural expectations of AI as inherently human-like.
Modern media coverage often sensationalizes AI developments with headlines about machines that "think," "feel," or "understand," reinforcing misconceptions about AI consciousness. Even technically sophisticated companies contribute to this confusion with demonstrations of AI assistants making dinner reservations or engaging in casual banter, blurring the line between sophisticated pattern matching and genuine human interaction.
The term "artificial intelligence" is itself problematic. Intelligence, with its connotations of consciousness, self-awareness, and intentional reasoning, misleadingly suggests that AI systems possess these human qualities. This linguistic inheritance clouds public understanding of what AI actually is: a pattern recognition and generation device.
Fear of Replacement
These misconceptions feed anxieties about AI replacing human workers and eventually surpassing human capabilities entirely. While concerns about technological unemployment are legitimate, they often rest on fundamental misunderstandings of AI's limitations. We risk both overestimating AI systems' capabilities in tasks requiring genuine understanding and underestimating the unique value of human judgment, creativity, and emotional intelligence.
The Reality of AI
Artificial intelligence, despite its name, differs from human intelligence in several crucial ways. AI systems do not "think" in any meaningful sense. They process patterns in training data using statistical methods. They do not "understand" content. They predict likely sequences of tokens based on their training. They do not have goals, desires, or consciousness. They optimize for mathematical functions defined by their training objectives.
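A deliberately tiny sketch can make those statistical methods concrete. The toy model below counts which word follows which in a scrap of text and then generates by sampling from those counts. Real language models are incomparably larger and use neural networks rather than a lookup table, but the underlying principle is the same: prediction from observed patterns, not understanding. The training sentence is an arbitrary example.

```python
# A toy next-word predictor: count which word follows which in some text,
# then "generate" by sampling from those counts. Statistics over observed
# patterns, not understanding. The training text is an arbitrary example.
import random
from collections import Counter, defaultdict

training_text = "the model predicts the next word the model does not understand the word"
words = training_text.split()

# Bigram table: how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(current):
    """Pick a next word based only on observed frequencies."""
    options = follows[current]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a short continuation from a seed word.
token = "the"
output = [token]
for _ in range(6):
    token = next_word(token)
    output.append(token)
print(" ".join(output))
```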
Most critically, AI lacks attributes that define human intelligence: self-awareness, symbolic grounding in the physical world, the ability to reason from first principles, and the capacity for original thought. An AI system generating human-like text is more akin to an autocomplete system than to a thinking entity. It can remix existing patterns in novel ways, but it cannot engage in genuine reasoning, understanding, or meaning-making.
When an AI system appears to express emotions, make decisions, or engage in creative work, it is executing statistical operations, not experiencing feelings, exercising judgment, or engaging in creativity. The apparent humanness of AI output is a reflection of patterns in its training data, not evidence of human-like consciousness or intelligence.
This difference between artificial and human intelligence has implications for how we should approach AI development, deployment, and regulation. We must understand AI systems for what they are: powerful but limited tools that can augment human capabilities but cannot replicate human consciousness or understanding.
The AI-Enabled Workforce of 2039
By 2039, artificial intelligence has transformed every form of human labor. Construction workers operate AI-enhanced robotic equipment through augmented reality headsets that overlay structural data onto their field of vision.
Plumbers use AI tools to detect pipe problems through walls. Auto mechanics work with AI systems that guide repairs through augmented reality. Farmers manage autonomous equipment while using AI to optimize crop yields and predict weather patterns.
New categories of work have emerged alongside these transformed traditional roles. AI Integration Specialists implement AI systems across sectors, from construction sites to law firms. Digital Ethics Arbitrators verify automated systems align with human values and safety protocols.
AI-Human Workflow Designers create interfaces between workers and AI systems in physical and digital spaces. Environmental Computing Analysts balance AI system performance with environmental impact.
Reality Check Specialists verify AI outputs against actual conditions and practical limitations. Machine Learning Forensics Experts investigate AI system failures across domains.
Skills Integration Coaches help workers combine traditional skills with AI capabilities. Legacy Knowledge Stewards preserve crucial manual skills and knowledge. Infrastructure Interface Coordinators manage AI integration with physical systems like smart buildings and transportation networks.
Success in 2039 requires combining physical capabilities with technical literacy. Workers must understand both traditional methods and AI systems. Physical workers leverage AI to access specialized knowledge, while knowledge workers engage with physical systems through AI interfaces. The boundaries between physical and mental labor have blurred.
The future belongs to those who learn to work alongside AI. Whether in construction sites or courtrooms, factories or farms, work depends on combining human judgment with AI capabilities.
1. https://www.bis.org/publ/work1135.pdf
2. https://www.brookings.edu/articles/ais-impact-on-income-inequality-in-the-us
3. https://www.leancompliance.ca/post/breaking-the-illusion-the-case-against-anthropomorphizing-ai-systems
4. https://www.nature.com/articles/s41598-024-76218-y.pdf
5. https://interestingliterature.com/2020/02/origin-word-computer-etymology/
6. https://brewminate.com/education-and-literacy-in-early-modern-england/
7. https://www.agecrofthall.org/single-post/school-life
8. https://www.britannica.com/biography/Charles-Babbage
9. https://dev.to/jay_tillu/the-history-behind-the-word-computers-549l
10. https://sites.bu.edu/cmcs/2017/11/16/the-history-of-computer-and-its-revolutionary-aspects/
11. https://www.tomshardware.com/reviews/upgrade-repair-pc,3000-2.html
12. https://quantumzeitgeist.com/a-brief-history-of-the-transistor-and-integrated-circuit/
13. https://en.m.wikipedia.org/wiki/History_of_computing_hardware_(1960s–present)
Thanks, Terry, for this thorough history of computing, along with the possibilities of where we are heading with the advent of AI.
I also appreciate your healthy skepticism of what AI can and cannot do. You might enjoy Ed Zitron's newsletter: https://www.wheresyoured.at/author/edward/. His critique of the AI industry offers what I think is a needed counter for all the technology optimism out there.
As I read your last section - predicting how AI will influence and impact every area of work - I was curious what the physical nature of AI will resemble. Still primarily on our phones? Or even more closely embedded in our lives, physically as well as cognitively?