
From assistant AI to agent AI: Preparing for the next phase of AI

Two topics are coming up repeatedly in our conversations with executives and teams, both of which need to be addressed for organizations to fully benefit from Agentic AI:

  1. The different pace of AI adoption across leaders and employees

  2. Uncertainty around accountability for Agentic AI outcomes


This year marks a tipping point: AI is no longer experimental. It is mainstream. Many employees are already using generative AI chat tools such as ChatGPT, Microsoft Copilot, Google Gemini, and Claude to get their work done more efficiently. In the current usage model, AI serves as a writing or coding assistant.

This setup feels relatively safe because users can review the output and decide what to keep or edit before “publishing” it under their name. Mistakes, like obviously AI-generated emails without the right context, are usually harmless, and sometimes even a bit amusing. But the next phase of AI evolution will change this dynamic significantly.
But the next phase of AI evolution will change this dynamic significantly.

What is agentic AI?

Agentic AI refers to systems that can take autonomous actions on behalf of humans, not just assist them.
These AI agents don’t wait for every instruction. They operate within defined goals, make decisions, initiate tasks, and adapt to feedback.

In enterprise settings, Agentic AI could:

  • Coordinate meetings and follow-ups

  • Generate and send reports

  • Monitor metrics and trigger alerts or actions

  • Manage portions of customer service and operations

  • Contribute to product design and software development

We are moving from “AI as a tool” to “AI as a teammate”.
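
To make the pattern concrete, here is a minimal sketch of an agentic loop in Python, using the “monitor metrics and trigger alerts” use case from the list above. Every name in it is hypothetical: the metric source, the alert channel, and the threshold values are stand-ins chosen for illustration, not any product’s real API.

```python
import random

def read_metric() -> float:
    """Stand-in for a real monitoring source (e.g., a metrics API)."""
    return random.uniform(0.0, 1.0)  # simulated error rate between 0 and 1

def send_alert(message: str) -> None:
    """Stand-in for a real side effect: an email, a chat message, a ticket."""
    print(f"ALERT: {message}")

def agent_step(threshold: float) -> float:
    """One observe-decide-act-adapt cycle; returns the updated threshold."""
    value = read_metric()  # observe the environment
    if value > threshold:  # decide whether action is needed
        # Act autonomously: the alert goes out without a human approving it first.
        send_alert(f"metric at {value:.2f} exceeds threshold {threshold:.2f}")
        # Adapt to feedback: back off slightly to reduce alert fatigue.
        threshold = min(threshold + 0.05, 0.9)
    return threshold

if __name__ == "__main__":
    threshold = 0.5  # the human-defined goal boundary the agent operates within
    for _ in range(5):
        threshold = agent_step(threshold)
```

The difference from today’s assistant model sits in the middle of agent_step: the agent sends the alert itself instead of drafting one for a human to review, which is exactly why the accountability questions discussed below become pressing.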

Microsoft’s recent Work Trend Index outlines a three-phase evolution:

  • Phase 1: Human with Assistant (today)

  • Phase 2: Human–Agent Teams (collaborative autonomy)

  • Phase 3: Human-led, Agent-operated (agents take over routine and tactical work)

The report predicts a future where most employees will function as “agent bosses,” overseeing multiple AI systems that do the heavy lifting.

Challenge #1: The uneven rate of AI adoption

Unlike past tech transitions, we are observing a reversal: senior leaders are more eager and comfortable adopting AI than many of their employees. While we haven’t conducted formal surveys, we have seen this trend consistently among the global professionals we train and coach through Tulip Management. Microsoft’s report supports this observation.

In their table “The Race to Be an Agent Boss,” leaders were ahead of employees on every measure of adoption:

  • Familiarity with AI agents

  • Frequency of use

  • Willingness to trust AI with high-stakes tasks

  • Expectation to manage agents

  • Use of AI for strategic thinking

  • Perception of AI as a career enabler

  • Time savings (at least 1 hour daily)


Bridging the adoption gap

To ensure organizations gain the full value of AI, they must actively close the adoption gap. Here are a few practical steps:

  • Encourage leaders to regularly share their AI workflows and learnings with their teams

  • Create space for experimentation and promote psychological safety around trying new AI tools

  • Establish peer-led learning groups (it is easier to adopt as a group)

  • Facilitate regular discussions on successful use cases, challenges, risks, and edge cases

  • Provide role-specific, hands-on training

  • Over time, integrate AI adoption goals into performance and development plans


Challenge #2: Accountability for agentic AI outcomes

The more autonomy AI gains, the harder it becomes to answer: Who is responsible when something goes wrong?
Agentic AI may act in ways that were not explicitly reviewed or approved. It could make an incorrect hiring recommendation, send flawed data to customers, or generate biased analysis.

Would it be acceptable if AI were no more error-prone than humans? After all, people make mistakes too.
But if the nature of AI mistakes is different, or if accountability is unclear, who bears the responsibility?
The employee? The company? The technology provider?

It reminds me of the debate around self-driving cars:
When an autonomous system acts on its own, who is liable: the automaker, the tech provider, the driver, or the insurance company?

So far, no clear international standards exist to assign accountability for AI-driven actions.
Governments, NGOs, and companies are working on frameworks for safety, privacy, fairness, and transparency, which is encouraging, but we are still far from consensus.


Until these questions are resolved, adoption of Agentic AI may remain slower in highly regulated industries or in sectors where mistakes carry significant costs. At the same time, we are likely to see faster adoption in operational areas where the risks are lower and the benefits, such as speed, efficiency, and scalability, are immediately tangible.

Conclusion

Agentic AI has the potential to transform how work gets done. I believe it will become the “industrial revolution” of our generation.


A CTO I spoke with recently shared a powerful image:

“Newton once said he was like a child playing on the shore of knowledge, in front of a vast ocean.
When it comes to humanity’s opportunities with AI, we are not even at the shore yet; we are still 20 miles inland.”

The opportunity to start with Agentic AI is right now!

Organizations don’t need to wait for all the rules to be finalized; they can start experimenting with Agentic AI in low-risk, well-defined areas while keeping an eye on evolving regulations. Starting small helps teams build the trust and experience they will need to scale AI responsibly, and it is a great way to close the internal adoption gap by building confidence across both leaders and employees.
