Mentoring Your Team Through the AI Transition: From Anxiety to Shared Understanding
By Nick Hixson

Artificial intelligence has entered the workplace in a predictable way: leaders see opportunity; teams feel uncertainty.

On the surface, the conversation is about tools, productivity, and competitive advantage. Underneath, it is about role, value, and relevance. The technical questions are usually manageable. The human questions take more care.

If you want your team to engage with AI constructively, more demonstrations and performance targets will not resolve the tension. What helps is a mentoring approach that builds shared understanding. That is what turns anxiety into agency.

AI Is Rarely a Technology Problem

When people resist AI, they are not normally objecting to software. They are responding to what they think it signals.

Loss of control. Loss of competence. Loss of status. Loss of relevance.

You will hear this indirectly:

  • “I’m not sure it’s accurate enough.”
  • “It won’t understand our clients.”
  • “We’ve always done it this way.”

Behind these comments is a quieter question: Where do I fit if this becomes central?

If AI adoption is handled purely as a systems rollout, that question goes unanswered — and anxiety increases. If it is approached as a learning process, you create space to examine that question openly. This is where mentoring becomes strategic rather than symbolic.

This echoes something Peter Drucker understood about times of turbulence: “The greatest danger is not the turbulence itself; it is to act with yesterday’s logic.” Handing teams a tool without the conversation to go with it is exactly that.

Start With Shared Understanding

In any mentoring relationship, the most valuable thing is not the transfer of knowledge from one person to another. It is the creation of shared understanding between them.

The mentor brings experience, perspective, and the ability to ask questions that help the mentee see their situation more clearly. The mentee brings fresh eyes, current context, and often challenges the mentor’s assumptions in ways that prove unexpectedly useful. Both parties learn. Both leave the relationship changed.

Applied to AI, shared understanding means moving from “You need to use this tool” to “Let’s examine where this might improve how we think, decide or serve.” That shift signals exploration, not enforcement, and means building shared clarity around four things: what we are trying to achieve, why it matters, what concerns are legitimate, and what success looks like.

Without this, guidance slips into advice-giving. With it, the conversation becomes genuinely collaborative.

Address Competence Anxiety Directly

A concern you will hear often is: “I don’t know enough about this.”

This is frequently mistaken for scepticism. More often it is anxiety about being exposed — about being seen to not know something they feel they should.

Mentoring provides a safer environment to normalise experimentation. The message needs to be: everyone is learning, early outputs will be imperfect, exploration is separate from evaluation, and curiosity is expected.

A practical approach is to work on a live task together. Use AI to generate a draft, an analysis, or an option set. Review it critically. Identify strengths, weaknesses, and risks. Confidence builds through shared evaluation, not abstract reassurance. And it helps significantly when the mentor models their own learning openly — admitting uncertainty does not weaken authority, it strengthens credibility.

Move From Compliance to Ownership

Mandating usage may achieve surface-level adoption. It rarely builds commitment.

Ownership emerges when individuals see a connection between AI and their own professional standards. The mentoring question that unlocks this is: “How does this help you do your best work?” A finance lead using AI to stress-test assumptions. A marketer exploring customer themes at scale. An operations manager mapping bottlenecks before redesigning a process. These are not compliance stories — they are craft stories.

When AI is linked to identity rather than obligation, adoption becomes self-sustaining.

Create Ethical Clarity

Uncertainty about boundaries fuels anxiety more than almost anything else.

  • Is this secure?
  • Can we trust the outputs?
  • Should we disclose its use?
  • Where does accountability sit?

Avoiding these questions creates mistrust. Addressing them creates stability. Use mentoring conversations to clarify data confidentiality principles, verification expectations, disclosure standards, and accountability structures. When people understand the guardrails, experimentation feels safer — and governance built with the team creates ownership of the behaviour, not just compliance with it.

Maintain Balanced Realism

Mentors need to avoid two common errors.

The first is evangelism. Overstating AI’s capability damages trust quickly, and it will be found out. The second is avoidance — ignoring AI because it feels disruptive leaves teams exposed and, eventually, behind.

A steadier stance is more effective. Acknowledge limitations. Encourage disciplined experimentation. And be explicit about what remains distinctly human: ethical responsibility, relationship-building, contextual judgement, and strategic choice. AI can generate options — it does not carry accountability. It can analyse patterns — it does not hold contextual judgement. These human capabilities are not diminished by AI. In many cases, they become more important.

From Anxiety to Agency

Anxiety grows when change feels imposed. Agency grows when change feels shaped.

Mentoring provides the structure for that shaping. When mentors and mentees clarify purpose, surface concerns, define boundaries, run small experiments, and reflect honestly — AI shifts from being a threat to becoming a capability.

Organisations will not differentiate themselves through access to tools alone. They will differentiate themselves through people who feel confident, competent, and ethically grounded in using them. Technology adoption without human alignment creates friction. Human alignment without technological awareness creates stagnation. Mentoring is what integrates the two.

Introduce AI not as a revolution to endure, but as a capability to shape. That shaping begins with structured conversation, practical experimentation, and shared understanding. And that, in essence, is what good mentoring has always done.

About the Author:

Nick Hixson is a business advisor and writer on strategy and leadership. He explores how complexity and human behaviour shape organisations. He is a Peter Drucker Associate and chairs the Advisory Board of the World Institute for Action Learning.

One comment

  1. Thank you for sharing this thoughtful blog posting. I find that your writing captures well an important challenge around AI in organizations — the psychological dimension of technological change. As you point out, the conversation is less about tools and more about purpose and competence. What matters most is how leaders help their teams think through why new tools matter and what new competencies and ways of thinking become increasingly important as they adopt them.

    Reading your blog posting, I was reminded of insights from Peter Drucker’s book “Management: Tasks, Responsibilities, Practices.” On page 126, Drucker suggests that effective planning begins by asking, “What new and different things do we have to do?” That question feels especially relevant today. AI challenges us not only to refine existing processes but also to rethink how value is created and how contributions are defined. On page 111, Drucker adds that the task of management is “to make resources productive.” When applied to AI, that includes making both human intelligence and artificial intelligence productive – in ways that reinforce one another rather than compete. More takeaways from reading the book here: https://app.thestorygraph.com/reviews/865b74c9-5bf5-4136-9ad5-728029e43c66

    Similarly, the book “The Effective Executive” opens on page 1 with Drucker’s call to “focus on getting the right things done.” This principle invites leaders to see AI not as a shortcut, but as a means to elevate their focus – to redirect human attention toward decisions, relationships, and innovations that require distinctly human judgment. A related insight from page 17 – to “use computers to free up time” – reminds us that technology has always been most valuable when it expands human capacity, not when it replaces human thought. More takeaways from reading the book here: https://app.thestorygraph.com/reviews/96982e28-a157-4cbe-867f-c29065f19581
