Student working on their laptop in the library

Beyond the Algorithm: How Leaders Can Build Trust and Navigate the 'AI Shaming' Divide

The conversation around artificial intelligence in the workplace often focuses on efficiency, automation and skills gaps. But what about trust? George Fox University’s webinar, “How to Build Trust and Lead with AI,” explored a surprising new dynamic: a growing digital divide rooted in social trust.

The key takeaway? For now, employees may be more likely to trust coworkers who do not use AI tools, a dynamic some have dubbed “AI shaming.”

The AI shaming phenomenon suggests that, as leaders push for AI adoption, they must also address a new layer of internal friction: the skepticism and distrust that can arise between the early adopters and the AI skeptics.

In November 2025, I was privileged to moderate a panel discussion with George Fox graduate researchers and industry experts to present new evidence on this topic and collaboratively discuss strategies for moving forward.

Here are three essential strategies for leaders to foster innovation in the age of AI without fracturing their teams:

  1. Acknowledge and Address the “AI Shaming” Divide

    The core finding of the featured research – New Digital Divide: A Social Exchange Theory Approach to Understanding the Relationship between Generative AI Literacy and Coworker Trust (still pending publication) – is a persistent tendency to trust non-AI users more. This is an organizational challenge leaders can no longer ignore.

    This research was developed by lead investigator Tim Veach and co-investigator Shawn Hussey, with additional contributions by George Fox doctor of business administration scholars.

    The researchers surveyed 147 working adults from nine industries to test how people respond to AI use in practical workplace scenarios. Survey respondents were asked to evaluate two hypothetical coworkers – one who used AI and one who did not – across 11 workplace trust scenarios. Participants also answered questions about their own AI literacy, actual AI use, social influence on AI use, and workplace context.

    One finding stood out immediately: Across age, gender, role, and industry, respondents consistently trusted the non-AI-using coworker more.

    Highly AI-literate individuals who used AI heavily were slightly more open to trusting AI-using coworkers, although they still favored the non-AI users in more scenarios. Conversely, low-literacy, low-use individuals trusted AI users less. In short, the more comfortable someone is with AI, the less they penalize others for using it – but the overall trend still favors the non-AI user.

    Why the distrust of AI users? It’s often not about the technology itself, but about perceived fairness, effort and accountability. If a team member uses AI to complete a task in minutes that a colleague takes hours to finish, questions about the validity of the work, the level of effort, and even professional integrity can arise.

    In response, the webinar leadership panel shared two recommendations for leaders:

    • Standardize AI Use: Do not leave AI adoption to individual discretion. Establish clear, company-wide guidelines on how AI tools must be used, ensuring transparency and ethical standards are met.

    • Focus on Outcomes, Not Tools: Redefine “work ethic.” Shift the team’s focus from how a task was completed (manually vs. with AI) to the quality and impact of the final outcome.

  2. Establish Institutional Accountability and Support

    To overcome internal skepticism, AI integration cannot be an unsupported, bottom-up effort. It requires strong, top-down infrastructure and clear accountability.

    As James Gurganus, George Fox University’s chief information officer, noted, a successful transition hinges on recognizing the historical context of technology adoption. Automation eventually becomes more trusted than manual effort because clear, reliable systems are put in place.

    As he sees it, leaders need to:

    • Invest in Infrastructure: Ensure that the systems, data and security protocols surrounding AI use are robust and reliable.

    • Prioritize Education: Provide mandatory training that goes beyond the “how-to” and delves into the ethical and responsible “why” of using AI, making Generative AI Literacy (GAIL) a core competency.

    Meanwhile, Sarah Cooley, a chief marketing officer, emphasized the need for clarity and support systems, especially in forward-thinking companies. From her perspective, leaders must implement:

    • Mandatory Human Oversight: For critical tasks, enforce review and sign-off processes to ensure human judgment and accountability remain in the loop.

    • Clear Measures of Accountability: Define what “ethical AI use” looks like and establish consequences for misuse, building a culture of responsibility.

  3. Lead with Motivation, Not Mandate

    A final crucial insight from the panel centered on the question: “How do we motivate people to lean into AI without causing fear?”

    The answer lies in empathetic, visionary leadership that transforms AI from a threat into a shared opportunity.

    • Show, Don’t Tell, the Vision: Leaders must articulate a compelling vision for what AI enables – not what it replaces. Frame AI as a powerful tool that liberates employees from mundane tasks, allowing them to focus on high-level strategy, creativity and human connection.

    • Cultivate Psychological Safety: Create an environment where employees feel safe to experiment with new tools, ask questions about AI, and admit their confusion without fear of “AI shaming” or falling behind.

    • Focus on Global Relevance: As DBA scholar Bili Sule pointed out, in many contexts, AI literacy is a critical economic differentiator. Frame AI skills as essential for career resilience and global competitiveness.

The Path Forward: Ethical, Trust-Based Leadership

The future of work isn’t just about integrating artificial intelligence; it’s about leading human beings through one of the most significant technological transitions in modern history.

By acknowledging the emotional and social components of this change – specifically the rising risk of “AI shaming” – and implementing transparent, accountable and supportive leadership practices, leaders and organizations can foster a culture where trust is built not by the tools themselves, but by deploying them through the critical and ethical lens that only humans can provide.

Join Our Next Webinar

This week, George Fox will host the second part of its “How to Build Trust and Lead with AI” interactive webinar on Zoom at noon on Wednesday, Jan. 28.

The event will explore actionable techniques for building confident, capable and trusting employee relationships as we continue to adopt and integrate AI in the world of work. For more information, contact Sean Aker at saker@georgefox.edu.

Register for the Webinar

Article Credits

This article was drafted by Gemini with the prompt: “Create a web article or blog post using the attached webinar recording mp4 file,” and edited by Sean Aker and others.

The research team includes lead investigator Dr. Tim Veach, co-investigator D. Shawn Hussey, PhD, and DBA scholar contributors Bili Sule, Ashley Seibel, Todd Miller and Kevin Hunt. The webinar outline was designed by Sean Aker, with contributions from the lead and co-investigators and the panel experts Bili Sule, Sarah Cooley, DBA, and James Gurganus.
