In April 2025, the University of Melbourne and KPMG released what may be the most comprehensive global study on AI trust to date. Led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School, and Dr. Steve Lockey, the research surveyed 48,340 people across 47 countries between November 2024 and January 2025, using representative sampling to capture a true cross-section of global attitudes.
Terry, thanks for this - you are so right about the recklessness of accepting technology you don't understand or trust. I continue to think AI literacy needs to be based in ethics. To be continued.
"Regardless of initial emotions, practical realities often force adoption. Competitors use AI and gain efficiency. Employers mandate AI tools. Industries standardize around AI platforms. The high usage rates coupled with low trust levels reveal an uncomfortable truth: people frequently use systems they fundamentally donโt believe in nor understand."
Thanks for continuing to address this issue. I realise that I sometimes - me, the AI critical thinker! - accept the AI answer a Google question provides without checking the provided link to a website. If I contemplate why, I have to admit that in that moment I felt a certain laziness in critical thinking about AI. The answer sounds reasonable, so it's probably true. Add to that a vague promise to myself to fact-check later, and we maybe have what you described above. If that happens to me, then I fear the worst for many.
It depends partly on the importance of the information. I've gotten pretty used to verifying everything if I take any action on it, including mentioning it in conversation. The bot has made me a much more careful purveyor of information. I'm much more ready to hedge than ever before, and to reveal that the bot was the source.
Terry, you are the lighthouse on the coast of shoals.
"Regardless of initial emotions, practical realities often force adoption. Competitors use AI and gain efficiency. Employers mandate AI tools. Industries standardize around AI platforms. The high usage rates coupled with low trust levels reveal an uncomfortable truth: people frequently use systems they fundamentally donโt believe in nor understand."
Thanks for your continued addressing of this issue. I realise that I sometimes - me, the AI critical thinker! - accept the AI answer a Google question provides without checking the provided link to a website. If I contemplate why, I have to admit that at that moment, I felt a certain laziness in critical thinking about AI. The answer sounds reasonable, so it's probably true. Add to that a vague promise to self to factcheck at a later time and we have maybe what you described above. If that happens to me, then I fear the worst for many.
It depends partly on the importance of the information. I've gotten pretty used to verifying everything if I take any action on it, including mentioning it in conversation. The bot has made me a much more careful purveyor of information. I'm much more ready to hedge than ever before, and to reveal that the bot was the source.