In April 2025, the University of Melbourne and KPMG released what may be the most comprehensive global study on AI trust to date.¹ Led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School, and Dr. Steve Lockey, the research surveyed 48,340 people across 47 countries between November 2024 and January 2025, using representative sampling to capture a true cross-section of global attitudes.
This wasn’t a one-and-done survey or a tech industry echo chamber. The research team used rigorous methodology across diverse economies and cultures, from Silicon Valley to São Paulo, from Tokyo to Tel Aviv.
The goal was simple yet profound: understand how humanity is actually relating to AI as it rapidly infiltrates every aspect of our lives. Healthy skepticism about survey data is warranted (the publication does not provide the actual survey items, which limits how much can be said), but the researchers are in it for the long haul, having collected longitudinal data.
The results reveal a phenomenon that researchers termed a “trust paradox,” a pattern that perfectly encapsulates our collective sleepwalk into AI dependence.
The Findings: A Paradox of Historic Proportions
The numbers tell a story of contradiction. Globally, 66% of people now use AI regularly, with widespread workplace integration becoming the norm. Yet trust remains surprisingly low, with only 46% willing to trust these systems.
In some developed nations like Australia, New Zealand, and the Netherlands, trust levels drop even further, with acceptance and optimism about AI ranking among the lowest globally.
But here’s where the pattern becomes concerning: despite this fundamental lack of trust, usage behaviors reveal widespread recklessness.
The study found that many employees admit to using AI in ways that violate company policies. A majority rely on AI output without evaluating its accuracy (66% globally), leading to increased workplace errors.
Perhaps most troubling, many workers upload sensitive company information to public AI platforms, creating significant security risks.
Neglect of education compounds these risks. Most workers have received no AI training whatsoever, with only 28% of consumers globally having any formal education about these systems.
Large portions of users admit they don’t understand AI well, yet paradoxically, many show little interest in learning more about the technology they increasingly depend upon.
This isn’t a static situation. Based on longitudinal data, trust in AI has declined from 63% in 2022 to 56% in 2025, while worry has increased from 49% to 62%.
KPMG executives surmise that “trust has diminished because AI is advancing rapidly, while people’s understanding is not keeping pace with its adoption.”
The workplace has become ground zero for this disconnect. Workers hide their AI usage, present AI-generated content as their own, and create what amounts to shadow IT infrastructures built on deception and technological dependence. As KPMG’s Bryan McGowan warned, “We are at a critical turning point where AI is moving fast, but governance is lagging.”
The Three Stages of AI Sleepwalking
For me, the study’s findings suggest observable patterns in how societies are navigating AI adoption, patterns that resemble sleepwalking more than conscious choice:
Stage 1: Emotional Activation Without Understanding
The AI journey began not with knowledge but with intense emotions across the spectrum. Some people experienced excitement and eagerness. Others felt fear and anger, resisting the change and worrying about job displacement or the loss of human agency.
But whether rushing toward AI or being dragged into it, both groups shared an under-appreciated condition: ignorance of what they were actually facing. With large majorities admitting they didn’t, and still don’t, understand AI well, we’re seeing populations make critical decisions based on emotion rather than comprehension.
Like sleepwalkers, they move with apparent purpose, some confidently, others anxiously, but without full awareness of their actions or destination.
This pattern appears across diverse contexts, from emerging economies embracing AI with optimism to developed nations approaching it with skepticism. The emotion varies, but the knowledge gap remains consistent.
Stage 2: Dependence Despite Distrust
Here the paradox crystallizes. Regardless of initial emotions, practical realities often force adoption. Competitors use AI and gain efficiency. Employers mandate AI tools. Industries standardize around AI platforms. The high usage rates coupled with low trust levels reveal an uncomfortable truth: people frequently use systems they neither believe in nor understand.
The workplace behaviors documented in the study (policy violations, hidden usage, unchecked acceptance of AI output) aren’t necessarily signs of rebellion. They could be symptoms of this unconscious adaptation.
People navigate around obstacles (policies, governance, common sense) not always through deliberate choice but through the momentum of collective movement. The initial emotions, whether excitement or fear, become secondary to practical necessity.
This stage manifests differently across regions. In the United States, for instance, half of workers reported using AI tools without knowing whether it was allowed, while 44% knowingly used them improperly. These aren’t isolated incidents but widespread patterns suggesting systemic challenges.
Stage 3: Entanglement Beyond Extraction
The final pattern (the ghost of Christmas yet to come) represents deep integration. AI becomes embedded not just in workflows but potentially in thinking patterns, decision-making processes, and organizational culture. What started as the unthinking application of a tool can become infrastructure, and then become invisible.
At this stage, extraction becomes increasingly difficult, beyond the reach of education. Those who hide their AI usage aren’t always just being deceptive. Many may literally be unable to imagine functioning without something they do not understand.
When majorities accept AI output without evaluation, they’ve effectively begun outsourcing elements of judgment.
The initial emotions that characterized Stage 1, whether enthusiasm or fear, fade into irrelevance. The trust deficit documented in Stage 2 can become a permanent feature of the technological landscape. The relationship then might shift from using AI to existing within AI-mediated environments.
Understanding the Patterns
It’s important to note that we are talking at incredibly high levels of abstraction and speculating broadly. Nonetheless, the trust paradox rings true for me. Some organizations successfully build trust through transparency and training. Some individuals maintain critical distance while benefiting from AI tools. Some regions show different adoption patterns based on cultural and economic factors.
But whole armies of people who do not trust AI are nonetheless using it for consequential work.
What the study reveals isn’t a universal law but a set of concerning tendencies. The gap between adoption and understanding, between usage and trust, creates conditions where unconscious dependence becomes more likely than informed choice. Education is what would make the difference.
The Path Forward
The KPMG study offers both warning and opportunity. The patterns it reveals (high adoption with low trust, widespread usage despite poor understanding) suggest we’re at a critical juncture in humanity’s relationship with AI.
The solution isn’t to halt AI adoption. That’s neither possible nor desirable. Nor is it to blindly trust systems we don’t understand. Instead, we need conscious, organized strategies that chart a path between naive enthusiasm and paralyzed fear. This requires:
Individual Actions: Before using AI for any task, spend time understanding its limitations. Question AI-generated content before accepting it. Seek out AI literacy resources, whether through employers, online courses, or community groups.
Organizational Responses: Companies need clear AI policies communicated effectively to all employees. Training shouldn’t be optional but embedded in workflow. Governance frameworks must evolve as quickly as the technology itself. And please: Education must stop arguing with itself and collaborate.
Societal Initiatives: Educational institutions need to integrate AI literacy into curricula. Policymakers must balance innovation with protection. Media must help citizens understand both capabilities and limitations of AI systems.
The study also reveals positive developments worth building upon. Organizations implementing thoughtful AI governance see better outcomes. Regions with higher AI literacy report more benefits and fewer errors. When people understand what they’re using, trust can grow alongside adoption.
The metaphor of sleepwalking isn’t meant to suggest we’re doomed to unconscious dependence. Sleepwalkers can be gently awakened. But awakening requires recognition of the state we’re in, not just as individuals but as societies.
Because if we don’t develop conscious relationships with AI soon, we risk finding ourselves in a world where the question isn’t whether we trust these systems, but whether we remember what it meant to trust our own judgment in the first place. The choice between sleepwalking and consciousness remains ours to make.
1. https://figshare.unimelb.edu.au/articles/report/Trust_attitudes_and_use_of_artificial_intelligence_A_global_study_2025/28822919