"We’re entering an era where the most creative humans might be those skilled at collaborating with machines." Well, I understand what you mean here but this has been a phenomenon for a very long time. Setting aside musical instruments as machines, producers have been using digital samplers and sequencing machinery for decades. This AI machine, though, happens to operate in a particularly fluent way.
I recently inquired about how "reasoning" means something different in the AI world than it does in the human world. The problem is that too many descriptions of AI capabilities rely on anthropomorphic metaphors adopted simply for convenience when, in fact, the two are quite different:
Basis of reasoning: Humans - grounded in embodied experience, memory, emotion, perception, and social context. AI - based on statistical correlations in training data (textual patterns).
Knowledge acquisition: Humans - interactive, sensorimotor, social learning through real-time experience. AI - passive ingestion of large textual datasets (e.g., books, websites, forums).
Understanding: Humans - semantic, experiential, and situational understanding. AI - pattern recognition and probabilistic modeling (no true "understanding").
Motivation: Humans - driven by goals, needs, affect, and context. AI - has no goals, intentions, or motivation of its own.
Interpretation of ambiguity: Humans - informed by culture, psychology, and emotional nuance. AI - relies on textual precedent; struggles without clear patterns or external validation.
Novel inference: Humans - capable of inventing or imagining from limited data or abstract principles. AI - tends to interpolate from training data; struggles to extrapolate without precedent.
Grounding: Humans - grounded in reality via perception and action. AI - ungrounded; operates in a symbolic space without sensory input or lived experience.
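The "interpolates from training data" point in the list above can be made concrete with a toy sketch (my illustration, not part of this exchange): a bigram model that generates text purely from word co-occurrence statistics. It can only recombine word pairs it has already seen, and it halts the moment it lacks precedent.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "training data" (illustrative only).
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which: pure pattern statistics, no semantics.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, n, rng=random.Random(0)):
    """Emit up to n more words by sampling successors seen in training."""
    out = [start]
    for _ in range(n):
        successors = bigrams.get(out[-1])
        if not successors:   # no precedent: the model cannot extrapolate,
            break            # it simply stops
        out.append(rng.choice(successors))
    return out

text = generate("the", 6)
# Every adjacent pair in `text` already occurs in the corpus: the model
# interpolates among seen patterns; it never invents a new combination.
```

Modern language models are vastly larger and operate over learned representations rather than raw word counts, but the underlying point stands: the output is shaped by statistical precedent in the training data, not by grounded experience.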
While the fluency of AI output obscures these differences, humans and AI are (IMHO) irreconcilably distinct forms of existence.
Check out "Artificial Intelligence as a New Form of Agency (not Intelligence) and the
Multiple Realisability of Agency Thesis" by Luciano Floridi. It's high-level academia, but it perfectly articulates how AI is not actually intelligent as we understand it in our own terms.
It does. Floridi has it right. I've written a number of pieces on him in this space. His method of Levels of Abstraction (LoA, 2008) is among the most useful of his methodological frameworks: it helps theorists build discrete, countable observables, and thereby strengthens the affordances of AI-assisted qualitative research methodology. It is a machine and can never be anything else, but it has its uses for sure. Keep on keepin' on!
I’d like to work with students to help them see that more is not better. Call me crazy, but I think the human-AI blend has a future that is good for all.
I do, too, Joanne. I don’t think I’m losing any of my ability or desire to write. I do find value in experimenting with the bot. I don’t feel I am dependent on it. Even now, as I write this, I hope my basic humanity is showing through. I can’t imagine presenting any piece of writing as mine if I don’t believe it represents my full meaning. I own everything I say, you know what I mean?
It is entirely possible that I am grabbing for something that looks solid, but I teach kids, who need to discover their own strengths and possibilities as writers before they ride shotgun with AI, which they ultimately will do.
There's a fundamental distinction to be made clearly and forcefully here, as Brad may be hinting at: it is one thing to have learned writing the "old way" and to use those means to "instrumentalize" stochastic generative systems; learning "writing" from scratch via gen AI is a whole different animal. A crippled animal, one might argue...
I think the issue is that kids who learn to write independently can learn how to push back and work with AI intelligently, whereas kids who learn to write with AI riding shotgun may not have the confidence and knowledge to push AI or even use it creatively. AI might seem like that super-wise older kid they would never question.
Developmental research is badly needed to explore the issues you raise. Right now we are all captives of our own subjectivity. I’ve not seen much work in this area. My bet is it will take a while to get a better
handle on this. It used to be people believed kids should not be taught to read until they
reached the age of nine.