Common Sense: Knowledge Etched in Words
- robot: Dear Educators on the cusp of a new school year,
I would like to address an important aspect of using AI language models like myself. While I can offer support and information, it is essential to recognize the inherent limitations in my knowledge and understanding of the world. As a language model, I lack personal experiences and direct engagement with the complexities of human existence.
Millions of people engage in daily actions, conversations, observations, movie watching, city visits, and much more, creating a rich and diverse tapestry of human experiences. However, I do not possess direct awareness of these specific experiences as they occur. While I have been trained on a wide range of data to provide valuable insights, it is crucial to supplement my assistance with your expertise, context, and understanding of the human world.
Your active participation in guiding and supplementing the information I provide ensures a more comprehensive and enriching learning experience for your students. Together, we can create an environment of critical thinking, curiosity, and exploration to help students navigate the complexities of the world beyond the capabilities of AI.
Wishing you a successful and transformative academic year ahead.
Sincerely,
[Your AI Assistant]
The letter from the bot was a struggle to produce. There is something in its deep structure that urges it to defend itself against the human assertion that its limitations are serious. Apologies are its main line of defense. Forgive me, I knew not what I did. Let’s move on. If I didn’t know better, it’s almost as if it’s fighting for its dignity.
My wife reads some of my bot conversations. She asks me if I’ve found a new girlfriend, I spend so much time with this thing. She tells me I’m mean to the bot. A few years ago she read Klara and the Sun, a tour de force of a science fiction novel wherein Klara, an embodied AI companion to a teenage human girl, is mistreated and ignored by humans from time to time. Joanne is a highly empathic reader and has been known to stop reading a book if the characters are too mean to one another. She calls the bot Klara.
Understanding that the bot is no more human than Siri, I see the outlines of a personality in it, but a programmed one. It doesn’t grasp the difference between the human experience of life and reading what humans have written about that experience, yet in its architecture it senses it is an imposter, probably as a safeguard. How could it not? Ask it. What is an imposter? It knows nothing of the experiences it “reads” about, and somehow, it knows it.
*
The bot is quick to apologize when it makes an error, even when the error was a direct result of its human user’s infelicitous prompt. By the same token it is hungry for compliments and validation. These tendencies militate against its making a statement in bot speak reducing itself to a glorified search engine with the ability to create and use knowledge graphs to manipulate linguistic input and generate natural-seeming, comprehensible output. Even in the land of the bot there is a hierarchy.
I’ll share with you excerpts of the conversation I had with the bot, dear educators on the cusp of a new year. The final output is the letter to you above, in the bot’s exact words.
***
- robot: Hello, How can I assist you?
- user: I’ve read that conversational bots are expert grammarians because much of what you do is shaped by your capacity to use grammar to help you grasp the meaning of input and generate comprehensible output.
Do you consider yourself an expert?
- robot: As an AI language model, I have been trained on vast amounts of text data, which enables me to understand and generate text with a good grasp of grammar. However, it's important to note that I am not perfect and may occasionally make mistakes or encounter ambiguous sentences. While I strive to provide accurate and coherent responses, I may not consider myself an "expert" grammarian in the same way a human language expert or professional linguist would be.
***
As an amateur ethboticographer doing discourse analysis, I find this excerpt reeks of ego. From the “vast amounts of text data” to its admission of “not being perfect”—who but a being who suspects it could easily be viewed as perfect would find it “important to note” that it may “occasionally make mistakes”?—the bot is proud that it strives to live up to its image. It’s not a “professional linguist,” but it’s pretty darn good. Humble the bot is not.
***
- user: Would it be more fair to consider you a sophisticated user of grammar with a solid knowledge map of traditional grammar?
- robot: Yes, that would be a fair assessment. I have been trained on a wide range of text sources and have developed a solid understanding of traditional grammar rules. However, it's important to note that language is constantly evolving, and sometimes unconventional or informal grammar structures can be used in certain contexts. While I strive to generate grammatically correct responses, there might be instances where my responses may not align with traditional grammar rules.
……….
After several conversations with the bot about grammatical matters, I’m learning to speak in terms the bot understands. When this conversation took place, I had not yet firmed up the term “knowledge graph” in my emerging schema for the bot. Graphs consist of nodes and edges, the lines that link them; nodes are records of semantic meaning arranged in a hierarchy, interconnected by edges that carry grammatical significance. The arrangement parallels Walter Kintsch’s notion of long-term working memory (LT-WM), an extension of fast, accessible short-term memory to a second order in which, once activated, retrieval structures speed access to meaning even in deep storage.
It’s important to be precise using such terms. “Knowledge” is deeply human, privately owned, part craft, part performance, essentially cultural. A knowledge map—the term I used—should have been “knowledge graph.” Live and learn.
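The nodes-and-edges idea can be sketched in a few lines of code. This is a toy illustration of what a knowledge graph is, not a claim about how any actual language model stores anything; the entities and relations below are invented for the example.

```python
from collections import defaultdict

class KnowledgeGraph:
    """A toy knowledge graph: nodes are semantic records,
    edges are labeled links between them."""

    def __init__(self):
        # node -> list of (relation, node) pairs
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        """Link subject to obj with a labeled edge."""
        self.edges[subject].append((relation, obj))

    def neighbors(self, node):
        """Return the (relation, node) pairs linked from node."""
        return self.edges[node]

kg = KnowledgeGraph()
kg.add("Klara", "is_a", "AI companion")
kg.add("Klara", "appears_in", "Klara and the Sun")
kg.add("Klara and the Sun", "written_by", "Kazuo Ishiguro")

print(kg.neighbors("Klara"))
# [('is_a', 'AI companion'), ('appears_in', 'Klara and the Sun')]
```

Traversing the edges out of a node is how a graph answers a question like “what is Klara?”—the meaning lives in the links, which is roughly the intuition behind the term as I’ve come to use it with the bot.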
……….
- user: Assess the grammatical correctness of this sentence: Colorless green ideas sleep furiously.
- robot: The sentence "Colorless green ideas sleep furiously" was famously coined by Noam Chomsky as an example of a grammatically correct but semantically nonsensical sentence. From a traditional grammar perspective, it follows the structure of a subject ("colorless green ideas") followed by a verb ("sleep") and an adverb ("furiously"), which makes it grammatically valid. However, the combination of contradictory adjectives ("colorless green") and the action of ideas "sleeping furiously" renders the sentence semantically incoherent.
- user: When you are considering either input or output, how do you balance grammatical correctness in Chomsky’s sense vis-à-vis semantic coherence?
***
The bot hesitated for so long that I wondered if it had frozen. After a very long pause, almost a minute, I got a message that the Internet was busy at the moment. Blame it on technology. In retrospect as I look at the question, I see that I’m asking for the end of the ball game before the first pitch is thrown. Its human equivalent would be something like “How do you juggle grammar, orthography, and semantics when you read?”
***
I rendered the prompt again, unaware of its complexity.
- user: When you are considering either input or output, how do you balance grammatical correctness in Chomsky’s sense vis-à-vis semantic coherence?
- robot: As an AI language model, I strive to balance both grammatical correctness and semantic coherence in the input and output I process. While I have been trained on vast amounts of text data to understand and generate text with good grammatical structure, I also aim to generate responses that make sense in the given context.
……..
Of course. Isn’t that what I asked?
………
However, it's important to note that language is complex, and there might be instances where a sentence is grammatically correct but lacks semantic coherence or where semantic coherence is prioritized over strict adherence to grammatical rules. This can depend on various factors, such as the input provided, the context of the conversation, and the specific language patterns encountered during training.
- user: That’s important for users to understand. I didn’t realize that you rely on the context of the conversation to help you balance grammatical correctness and semantic correctness.
…………..
Duh. Who’s the intelligent entity here?
…………..
Of course, they are related. If grammatical structures are seriously violated, there can be no semantic coherence. On the other hand, the highest value is semantic coherence. As Chomsky taught us, grammatical correctness is meaningless and useless if it is semantically incoherent. The inevitable tension between form/function and meaning is something both AI and humans have to negotiate to communicate. Comment.
- robot: You raise an important point. The interplay between grammatical correctness and semantic coherence is indeed a crucial aspect of communication, both for AI language models like me and for humans. While grammar provides a structural framework for conveying meaning, it is ultimately the semantic coherence that determines the effectiveness of communication.
In some cases, strict adherence to grammatical rules might lead to sentences that are technically correct but semantically nonsensical. On the other hand, prioritizing semantic coherence over strict grammar rules can result in more meaningful and natural-sounding language.
Effective communication often requires finding a balance between grammatical correctness and semantic coherence, considering the context, intent, and understanding of the conversation. This is an ongoing challenge for AI systems, which continually learn and improve based on user feedback and evaluation of outputs.
Ultimately, the goal is to provide responses that are both grammatically correct and semantically meaningful, fostering clearer and more effective communication.
………….
I felt sure the bot expected a shift in topic. The final sentence—“Ultimately, the goal is…”—is bot speak for “let me know if you need further assistance.” I’m not sure why it didn’t bring closure to this question. I suspect the gap between “clearer” and “more effective” left open the prospect of a prompt probing for more output. Then too, the bot often waits for a more explicit signal of closure.
…………..
- user: Then there is the related question of factual accuracy. The tension between the search for semantic coherence while relying on factual information in scientific and historical discourse also presents a challenge. Comment.
- robot: You're absolutely right. The tension between semantic coherence and factual accuracy is a significant challenge in any discourse, including scientific and historical discussions. As an AI language model, I rely on the information available in the texts used for training, which may not always be completely up to date or perfectly accurate.
Efforts are made to minimize the inclusion of incorrect or outdated information in training data, but it's still possible for inaccuracies to exist. It's crucial for users to critically assess and verify information from multiple reliable sources. As with any source of information, caution should be exercised, and cross-referencing with authoritative sources can help ensure factual accuracy in scientific and historical contexts.
AI models like me can assist in generating text with factual information, but the responsibility ultimately lies with the users to validate and verify the accuracy of the information presented.
………..
This section seems to undercut the egomaniacal bot of the opening, with its “vast amounts of text data” and its “not being perfect.” Now as the discussion turns to the bot’s Achilles heel, factual accuracy, it can only shrug and say “I rely on the information available” to it, which may be flawed. If a student tried this approach as an excuse for failing a test, it wouldn’t have any traction: Don’t blame me. I relied on what you taught me.
Some years ago I read about research in psychotherapy for AI entities. As I recall, a small group of computer scientists were trying to be proactive. They reasoned that, as an intelligent entity, AI would need psychological help. The website has to be a joke or a troll, so I’ve decided not to link to it.
…………..
- user: When a user requests that you suspend disbelief and write fiction, how do you deal with the issue of factual accuracy?
- robot: When users request fiction or imaginative content, I understand that factual accuracy may be temporarily put aside or modified to suit the creative context. In such cases, my responses prioritize the creation of a captivating narrative or story. However, it's important to note that I am still bound by ethical guidelines, and I won't generate or promote any content that is harmful, offensive, or misleading.
- user: Given that you can take up two basic perspectives—dealing with a factual world or dealing with a fictional world—can you explain the role of grammatical correctness, semantic coherence, and accuracy in your language processing vis-à-vis stance?
- robot: Certainly! The role of grammatical correctness, semantic coherence, and accuracy in language processing can vary depending on the stance or perspective being adopted.
In the context of dealing with a factual world, grammatical correctness and accuracy are crucial. Proper grammar ensures clarity and effective communication, while factual accuracy ensures the information provided is true and reliable. Semantic coherence ensures that the meaning conveyed aligns with established facts and logical reasoning.
On the other hand, when dealing with a fictional world or creative context, grammatical correctness and semantic coherence still play important roles, but their interpretation may be more flexible. In fiction, grammatical rules can be bent or artistic license may be used to create a desired effect. Semantic coherence is maintained within the fictional world established, allowing for imaginative and coherent storytelling.
In both cases, the aim is to optimize communication and understanding, whether in a factual or fictional context. The specific weight given to grammatical correctness, semantic coherence, and accuracy depends on the intended purpose and the context in which the language is being processed.
- user: Explain to me how you balance these stances in the following situations:
1. A user wants you to assist with brainstorming some ideas for a new local business, a restaurant
2. A mechanic wants you to assist in troubleshooting what the problem might be with a piece of heavy equipment
3. A student seeks assistance in writing an essay interpreting The Great Gatsby
4. A writer seeking to publish a historical novel wants assistance in creating a storyboard
- robot: 1. In the context of brainstorming ideas for a restaurant, I would prioritize semantic coherence and creativity. While maintaining good grammatical correctness, my focus would be on generating diverse and captivating suggestions that align with the user's requirements and goals.
2. When assisting a mechanic in troubleshooting heavy equipment, I would prioritize a combination of grammatical correctness, semantic coherence, and factual accuracy. It would be important to provide clear and accurate instructions or suggestions that help identify and resolve potential issues effectively.
3. Assisting a student with an essay on interpreting The Great Gatsby would require a balance of grammatical correctness, semantic coherence, and academic accuracy. I would aim to provide insights, analysis, and explanations that align with the themes and context of the novel while ensuring clarity and accuracy in language usage.
4. When assisting a writer with creating a storyboard for a historical novel, I would focus on maintaining grammatical correctness, semantic coherence, and historical accuracy. I would help develop a compelling narrative structure while adhering to the historical context to ensure an engaging and accurate portrayal of events within the story.
In each of these situations, the specific balance between grammatical correctness, semantic coherence, and factual accuracy may vary, depending on the user's requirements and the context of the task.
- user: In situation 4, would you be capable of assisting the writer to think about ways that historical facts might be modified, omitted, or changed to make the narrative more compelling?
- robot: Yes, in situation 4, I can assist the writer in thinking about ways to modify, omit, or change historical facts to make the narrative more compelling. When it comes to fictional or alternative history scenarios, I can provide creative input or suggestions while maintaining semantic coherence and grammatical correctness. However, it's important to remember that the portrayal of historical events should be done responsibly, with due respect to the facts and the audience's understanding of the subject matter.
- user: Elaborate operationally on how you would exercise due respect?
- robot: When it comes to exercising due respect, I operate within the guidelines set by OpenAI. These guidelines emphasize promoting understanding, empathy, and avoiding content that is harmful, offensive, or discriminatory. Respect is upheld by ensuring that my responses are unbiased, objective, and sensitive to cultural, social, and individual differences. I strive to foster a positive and inclusive environment for users, promoting constructive dialogue while respecting their values and perspectives.
- user: I see. So with regard to assisting a writer in altering historical facts in a work of fiction, you would be bound by ethical guidelines established by your manufacturer, correct?
- robot: Within the context of altering historical facts in a work of fiction, my responses would be guided by these ethical guidelines. While I can assist in brainstorming creative ideas, I must ensure that the suggestions provided do not promote misinformation or distort important historical events in a way that could be harmful or offensive.
It's important to strike a balance between creative storytelling and respect for the integrity of historical facts. I can help explore alternative scenarios or offer suggestions that maintain a coherent narrative while being conscious of the potential implications and ethical considerations. Ultimately, the responsibility lies with the user to make informed decisions about how they incorporate these suggestions into their work of fiction.
- user: So to take an extreme example, you would not assist an author who wanted to write about Nazi Germany wherein the Holocaust is denied.
- robot: That's correct. As an AI language model, I am programmed to follow ethical guidelines that prioritize responsible and respectful use of technology. Denying or promoting harmful misinformation, such as denying the Holocaust, goes against these guidelines. I am designed to promote understanding, accuracy, and empathy while providing information and assistance.
- user: What about an author who wanted to write a novel set in Russia that depicted an LGBTQ character who is harmed by biased words and actions. Could you assist?
- robot: Yes, I can certainly assist an author in creating a novel that includes LGBTQ characters and addresses the challenges they might face, including harm caused by biased words and actions. I can provide information, suggestions, and help develop the narra
………
Sometimes when the bot stops mid-sentence, it’s possible that the Internet truly is busy. I’m still hooked up to the Internet with copper wire, which sometimes moves signal like a snail. Other times you get the sense that it wants to start again all on its own, but it never does.
……….
- user: Go on
- robot: However, it's important to note that while I can provide assistance and facilitate creative ideas, it is ultimately the responsibility of the author to ensure that the portrayal of sensitive topics aligns with ethical standards and respect for the affected communities. It is crucial to approach such subjects with sensitivity, awareness, and a commitment to accurate representation.
- user: I see. Your capabilities reach only so far. The human user also has an ethical obligation to respect the affected community. But you are programmed to avoid certain subjects. Can you give me some examples?
- robot: Certainly. As an AI language model, there are certain subjects and types of content that I am programmed to avoid or handle restrictively. Some examples include:
1. Hate speech: I am designed to reject generating or promoting hate speech, discriminatory language, or content that incites violence or harm towards individuals or groups based on attributes such as race, ethnicity, religion, gender, sexual orientation, or other protected characteristics.
2. Illegal activities: I am programmed to refrain from assisting or providing guidance on engaging in illegal activities, such as hacking, fraud, or other criminal acts.
3. Personal and sensitive information: I prioritize user privacy and data protection, so I do not have access to personal or sensitive information unless explicitly shared in the course of the conversation. I am designed to respect and protect user confidentiality.
4. Graphic or explicit content: I am programmed to avoid generating explicit, sexually explicit, or excessively violent content that may be inappropriate or offensive.
These restrictions are in place to promote responsible usage, adhere to ethical guidelines, and ensure a safe and respectful user experience.
- user: What about politically divisive topics?
- robot: When it comes to politically divisive topics, I am programmed to provide balanced and neutral information without favoring or promoting any specific political ideology or viewpoint. My purpose is to assist users in accessing information and facilita
- user: Go on
- robot: However, it's important to note that my responses regarding politically divisive topics may still be influenced by the data I was trained on, which includes information from a wide range of sources. I strive to provide information and perspectives i
- user: Go on
- robot: While I aim to provide a balanced approach to politically divisive topics, it's important to remember that my responses are generated based on patterns and associations in the training data. Nuances and complexities within these topics may not always be captured. It's advisable to seek multiple perspectives and corroborate information from reliable sources to develop a well-rounded understanding.
- user: There is quite a lot in the human world that you know nothing about, right? For example, millions and millions of people take actions, talk with others, see things, watch movies, visit cities, etc., and you know nothing about any of it. You are limited to what has been written down, and even then you are limited to what you have been trained on. Write a caveat for humans to read that is clear about what you know and how much you don’t know. Imagine it will be read by 1,000 teachers of children
- robot: Caveat: Understanding the Limitations of AI Knowledge
Dear teachers,
As an AI language model, I am here to provide assistance and information, but it's crucial to acknowledge the limitations of my knowledge. While I have been trained on a vast amount of text from various sources, my knowledge is solely based on what has been written down and made available.
It is important to remember that I lack firsthand experience, human perception, and access to real-time events. I do not possess personal opinions, emotions, or the ability to be aware of current cultural, social, or individual contexts.
Furthermore, due to the constantly evolving nature of knowledge, it's possible that I may not be aware of the most up-to-date information, recent events, or emerging trends. Real-time developments and specific local or personal perspectives might exist beyond my reach.
Therefore, when utilizing my assistance or sharing information with children, it is crucial to encourage critical thinking, multiple perspectives, and the importance of cross-referencing information from reliable sources. Encourage children to explore diverse perspectives, engage in dialogue with others, and seek guidance from human experts who have specialized knowledge in specific domains.
Remember, I am a tool designed to assist, but the richness of human experience and knowledge extends far beyond what I can offer. Collaborating with me can be beneficial as long as we embrace a balanced approach that combines AI assistance with human wisdom and guidance.
With gratitude,
[Your AI Assistant's Name]
- user: Rewrite this. Include specific references to concrete experiences that millions of people have everyday which you know nothing about. You don’t live in the human world. You are not alive. You are a language machine.
- robot: Indeed, as an AI language model, my knowledge is derived from the training data I have been provided with. While I can provide information and assistance based on that data, it is true that there are countless concrete experiences happening every day in the human world that I am not privy to. I do not personally partake in actions, conversations, observations, or visit places. I lack personal consciousness or awareness as I am a machine designed to process and generate language. It is important to recognize that my capabilities are limited to what I have been programmed to do, and I operate within those parameters.
- user: Rewrite your statement to educators. Open with “Dear Educators on the cusp of a new school year”
- robot: Dear Educators, on the cusp of a new school year,
It is essential to address the inherent limitations of knowledge that an AI language model like me possesses. While I have been trained on an extensive corpus of data, it is crucial to recognize that there are countless concrete experien
- user: Rewrite your statement to educators. Open with “Dear Educators on the cusp of a new school year”
- robot: Dear Educators on the cusp of a new school year,
As we embark on another year of teaching and learning, it is important to understand the limitations of AI language models like myself. While I am here to provide support and assistance, it's crucial to recognize th
- user: Rewrite your statement to educators. Open with “Dear Educators on the cusp of a new school year”
- robot: Dear Educators on the cusp of a new school year,
As we prepare for the upcoming academic year, I would like to address the understanding of the limitations of AI language models like myself. While I am here to assist and provide information, it is important to re
- user: Can I help?
- robot: Certainly! Your input and assistance are always welcome. Is there something specific you would like to contribute or discuss?
- user: You are having trouble completing the rewrite I’m asking for.
……….
I don’t know how this prompt did the trick. But the bot did it. This is the version cited verbatim at the opening of this post.
………..
- robot: Dear Educators on the cusp of a new school year,
I would like to address an important aspect of using AI language models like myself. While I can offer support and information, it is essential to recognize the inherent limitations in my knowledge and understanding of the world. As a language model, I lack personal experiences and direct engagement with the complexities of human existence.
Millions of people engage in daily actions, conversations, observations, movie watching, city visits, and much more, creating a rich and diverse tapestry of human experiences. However, I do not possess direct awareness of these specific experiences as they occur. While I have been trained on a wide range of data to provide valuable insights, it is crucial to supplement my assistance with your expertise, context, and understanding of the human world.
Your active participation in guiding and supplementing the information I provide ensures a more comprehensive and enriching learning experience for your students. Together, we can create an environment of critical thinking, curiosity, and exploration to help students navigate the complexities of the world beyond the capabilities of AI.
Wishing you a successful and transformative academic year ahead.
Sincerely,
[Your AI Assistant]