“We all need to be challenged out of our mistakes, stupidities, and complacencies—especially when it is our own intellectual blinkers that prevent us from seeing them as such” (Priest, 2006, p. 207, cited in Peters, 2019, Mind and Language).
Intellectual blinkers often block our inner eyes not only from seeing, but from imagining, what might lie just beyond our customary conceptual boundaries. If you ordered a grilled cheese, fries, and a soda at a diner and got what you see in the image, how would you react?
Perhaps because bread is often baked in round or semi-square loaves and sliced into square pieces before being assembled into a cheese sandwich and fried in butter, the triangular bread is a bit unnerving. I was unnerved when I first saw it. Then it dawned on me: I have a bias the bot doesn’t share. I expect regular slices of bread. My prompt was simple: a grilled cheese sandwich, fries, and a Coke, nothing fancy.
To what degree is the grilled cheese image a hallucination? I wondered. So I tried again: A peanut butter and jelly sandwich and a glass of milk. Please make an image.
From this small sample I deduced that the scope of my bias regarding grilled cheese and PB&J sandwiches is far narrower than the bot’s, at least in visual mode. I’ve never seen white peanut butter. These sandwiches should be square. Why might this disagreement arise? Learning itself can be thought of as acquiring a collection of ever more sophisticated biases. Living in a subjective world where people mostly agree on things like sandwiches is useful even without universal agreement, but the mainly cultural definitions of “sandwich” set out land mines for objective machines working mathematically.
I don’t know about you, but I’m biased toward peer-reviewed academic studies. I am more apt to take their thinking seriously, meaning I’m more apt to change reading gears. I’m sure I don’t have the requisite prior knowledge to comprehend the substance of studies done by computer scientists on AI biases, how they appear, and how they are best managed, but I may have to pry open my bias and read simplified explanations translated for general uptake. My technical expertise is so nonexistent that I can’t hazard a guess to explain algorithmically where these (to me at least) bizarre images came from. But I have an intuitive sense that the bot is making a good-faith effort to make sandwiches for me, and I can assimilate them into my sandwich schema—sort of.
*****
I took a look back at some of my interactions with the bot when I was a complete newcomer. Like most laypeople, I was emotionally fragile at the beginning. I didn’t know if some evil cyber monster was going to take up housekeeping on my iPad. When the bot flunked my first test of artificial consciousness by failing miserably to write a poem, my anxiety dissipated along with my excitement. I concluded the bot was dumb as a post.
Looking back, I realize my bias. I was so thoroughly entangled with my own ideas about poetry that I failed to see them as a bias, a carefully nurtured bias putting blinkers on me. I didn’t like the bot’s poems, and that’s OK. They didn’t meet my criteria. Therefore, this machine could not have consciousness. Now even I acknowledge the mistake, the complacency, the bias that led me to dismiss a profound question as asked and answered. I’m pretty sure I would have seen the PB&J sandwich image as a hallucination back then. Like you, I bet, I’ve got a strong bias against hallucinations.
After considerable meandering on long walks with the machine, I deliberately took a new tack. I read complaints that OpenAI hadn’t written a user’s manual—another common bias: technology always comes with a user’s manual. I also read a suggestion in the media: since OpenAI isn’t going to be forthcoming with user instructions, try asking the bot to explain its existence. So I did. For example, from August of last year…
Interaction from August, 2023
- robot: Hello! How can I assist you?
- user: As a newcomer to OpenAI’s ChatGPT, I have been talking with you to create a flow chart of algorithms you learned during your training period and continue to refine as you interact with human users. Restate what I’ve just said, correcting me when needed, pointing to words that involve subtleties I’m overlooking.
- robot: As a newcomer to OpenAI's ChatGPT, you have been conversing with me to develop a flowchart of algorithms that I learned during my training period and consistently improve as I engage with human users. Feel free to correct me if I have misunderstood any part or if there are any nuances in your words that I may have overlooked.
I cringe as I read this command. For one thing, bots don’t learn algorithms during training; they use algorithms to learn informational patterns during training. And how could I ask the bot to restate what I said and in the same breath ask it to correct me and lay bare all the subtleties I missed? Restate what I’ve just said, I said, the first words of my primary command expressed as an independent clause. To handle my modifiers to that command (correct me, point out subtleties), the bot reciprocated in its second sentence—you correct me, too, bozo.
I wasn’t fully aware that the bot typically restates words or close synonyms from the initial command as part of its processing. Humans normally read a prompt silently to make sense of it and then provide a naked response; for humans, processing the prompt is an internal matter, left out of the reply. Bots, however, restate the prompt and use its lexical items to achieve coherence and cohesion in output. A bot has to encode (write) the user prompt as part of its output to establish the parameters of the language patterns it uses to predict the words queued up to appear on the screen in real time (milliseconds), though from there the conversation can entertain twists and turns. Below, after a small sketch of that next-word loop, is a fresh prompt with a greatly improved bot response compared to just six months ago. Note, however, that the restatement is not verbatim like the August restatement; it seems more natural (another bias I have), an improvement from GPT-3 to GPT-4.
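For the curious, here is a minimal sketch of that predict-the-next-word loop in Python. It leans on the Hugging Face transformers library and the small, public GPT-2 model as a stand-in (an assumption on my part, since ChatGPT’s own weights and code aren’t public), and it simply picks the likeliest next token at each step:

```python
# A toy version of the next-word loop, assuming the Hugging Face
# transformers library and the small public GPT-2 model as a stand-in.
# This is a sketch of the general technique, not ChatGPT's actual code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Tell me about the causes of the Civil War."
ids = tokenizer(prompt, return_tensors="pt").input_ids  # the prompt seeds the context

for _ in range(25):  # predict 25 tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits       # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()     # greedily take the single likeliest token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # the prompt plus the continuation predicted from it
```

Everything the model produces is conditioned on the prompt it was handed, which is why the prompt’s own words keep echoing back in the output.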
Interaction from January, 2024
User: Tell me about the causes of the Civil War in three concise sentences.
Smith4: The American Civil War was primarily caused by deep-seated divisions between the Northern and Southern states over the institution of slavery, particularly its expansion into new territories. Disagreements over states' rights versus federal authority, economic differences between the industrialized North and the agrarian South, and political conflicts, including the election of Abraham Lincoln, further fueled tension. The culmination of these issues led to secessionist movements in the South, resulting in eleven states forming the Confederate States of America and precipitating the war following the attack on Fort Sumter in 1861.
Given the political kerfuffle over Nikki Haley’s recent response to this same question at a town hall, where she didn’t mention slavery as a cause, the bot’s response presents an opportunity to address issues of bias. Because backlash forced Haley to backpedal and mop up Republican angst, because public sentiment required her to say “slavery caused the Civil War,” it’s fair to say that the bot’s bias aligns with the dominant bias in the general public: the primary cause was the longstanding disagreement over slavery, not over states’ rights. Here’s what the bot has to say in its defense:
User: Respond in three concise sentences to the argument that you are biased, that the real cause was disagreement over states’ rights.
Smith4: While the concept of states' rights was indeed a significant aspect of the national debate, historical consensus views it as secondary to the central issue of slavery, which was intrinsically linked to states' rights discussions. Statements from seceding states at the time, such as the "Declaration of the Immediate Causes Which Induce and Justify the Secession of South Carolina," explicitly cite the defense of slavery as a primary motive for secession. The emphasis on states' rights often emerges in post-war narratives as part of the "Lost Cause" ideology, which downplays the role of slavery to reframe the Confederacy's motivations in a more favorable light.
Check out Confederate Vice President Alexander Stephens’s “Cornerstone Speech” if any doubts remain about the primary cause of what could have been the death of America.
*****
I decided to end this post with a list of suggestions from Smith4 for managing bias systematically. This preliminary proposal recognizes agents in the loop ranging from mechanical, non-organic actors to individual humans to humans in institutional and social groups. As with most issues involving humans and machines, the ultimate source of bias is human. We have to acknowledge this fact; we can’t just complain that bots are biased. (After the list, I’ve appended a small code sketch of my own illustrating the kind of technical debiasing tools item 9 mentions.)
Smith4: AI practitioners can use the guidelines and frameworks from organizations such as the Partnership on AI, the AI Now Institute, and the Algorithmic Justice League to counteract bias in AI systems in several practical ways. Here's how:
1. **Understand and Recognize Bias**: By studying the research and reports these organizations produce, practitioners can become more aware of the various forms of bias that might exist in AI systems, such as data bias, algorithmic bias, and societal bias.
2. **Incorporate Best Practices**: By adopting the best practices for ethical AI development recommended by these consortia, practitioners can implement processes that minimize bias. This includes inclusive data collection, ensuring that training datasets are representative, and conducting regular assessments for bias throughout the AI lifecycle.
3. **Implement Impact Assessments**: Guidelines often recommend AI impact assessments to understand how AI systems affect different user groups. This involves evaluating the system's outcomes across different demographics to ensure that it performs fairly.
4. **Engage in Multi-Stakeholder Dialogue**: Practitioners are encouraged to engage with various stakeholders, including those from underrepresented groups, to gain insights into how AI systems may impact them differently. This helps identify potential areas of bias that may not be immediately obvious.
5. **Adopt Transparency and Accountability Measures**: Following recommendations to maintain transparency about how AI systems work can help stakeholders understand and trust the technology. This includes clear documentation of AI systems' decision-making processes and openness to third-party audits.
6. **Regularly Update and Test AI Systems**: The guidelines stress the importance of continuous testing and updating of AI systems to address bias as it is identified. Practitioners are urged to monitor these systems in real-world scenarios and adjust accordingly.
7. **Equity and Diversity Training**: Many guidelines recommend training programs for practitioners to foster an inclusive culture and to become more aware of unintentional biases that could affect system design and deployment.
8. **Build Diverse Teams**: To counteract bias, organizations suggest creating diverse teams that can offer a variety of perspectives during the development process.
9. **Utilize Technical Tools**: Certain guidelines provide technical tools or methods for debiasing AI systems, such as algorithms for fair classification, methods for balancing datasets, and tools to interpret machine learning models.
10. **Stay Informed and Adaptive**: AI practitioners are encouraged to stay updated with ongoing research and policy changes recommended by these organizations to ensure that their practices remain on the cutting edge of ethical AI.
By using these strategies, which are informed by industry best practices and consortia guidelines, AI practitioners can actively work against the incorporation and perpetuation of bias in their AI systems, leading to more fair, ethical, and responsible AI deployment.
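Item 9 on that list is where the math lives, so here is the promised illustration. The sketch below uses entirely synthetic data that I invented and one classic preprocessing move, the “reweighing” technique of Kamiran and Calders: measure the gap between a toy classifier’s approval rates for two groups, then reweight the training examples so the model trains on a more balanced picture. It is a sketch under those assumptions, not a production recipe:

```python
# A minimal sketch of dataset reweighing (Kamiran & Calders) on synthetic,
# made-up data: measure a selection-rate gap between two groups, then
# reweight training examples so (group, label) combinations the data
# underrepresents count for more during training. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)             # a binary sensitive attribute
x = rng.normal(size=(n, 3)) + group[:, None]   # features correlated with group
y = (x.sum(axis=1) + rng.normal(size=n) > 1.5).astype(int)  # skewed outcomes

def selection_gap(model):
    """Absolute difference in the model's approval rates for the two groups."""
    pred = model.predict(x)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

plain = LogisticRegression().fit(x, y)
print("gap before reweighing:", round(selection_gap(plain), 3))

# Weight each example by P(group) * P(label) / P(group, label) -- what the
# data would look like if group membership and outcome were independent.
weights = np.ones(n)
for g in (0, 1):
    for label in (0, 1):
        mask = (group == g) & (y == label)
        expected = (group == g).mean() * (y == label).mean()
        weights[mask] = expected / max(mask.mean(), 1e-9)

fairer = LogisticRegression().fit(x, y, sample_weight=weights)
print("gap after reweighing:", round(selection_gap(fairer), 3))
```

Open-source toolkits such as AIF360 and Fairlearn package this kind of check and mitigation, along with many more sophisticated ones.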