5 Comments
Martha Nichols

Terry, thanks for this - you are so right about the recklessness of accepting technology you don't understand or trust. I continue to think AI literacy needs to be based in ethics. To be continued 😉

Malcolm J McKinney

Terry, you are the lighthouse on the coast of shoals.

Malcolm J McKinney

"Regardless of initial emotions, practical realities often force adoption. Competitors use AI and gain efficiency. Employers mandate AI tools. Industries standardize around AI platforms. The high usage rates coupled with low trust levels reveal an uncomfortable truth: people frequently use systems they fundamentally donโ€™t believe in nor understand."

Marion van Engelen

Thanks for continuing to address this issue. I realise that I sometimes - me, the AI-critical thinker! - accept the AI answer Google gives to a question without checking the link to the source website. If I ask myself why, I have to admit that in the moment I felt a certain laziness about thinking critically: the answer sounds reasonable, so it's probably true. Add to that a vague promise to myself to fact-check later, and we have maybe what you described above. If that happens to me, I fear the worst for many.

Terry Underwood, Ph.D.

It depends partly on the importance of the information. I've gotten pretty used to verifying everything if I take any action on it, including mentioning it in conversation. The bot has made me a much more careful purveyor of information. I'm much more ready to hedge than ever before, and to reveal that the bot was the source.
