The Rise of the AI Champion: Pros & Pitfalls
Feb 16, 2026

Someone in your company just got nominated as the AI Champion.
Maybe it happened in a leadership meeting. Maybe it was a Slack message. Maybe it was a casual "you're good with tech, right?" over coffee. Either way, someone has been handed the mandate to figure out AI for the business.
Does that sound familiar?
It should. We've seen this movie before.
We Already Played This Game
In the early 2010s, Agile swept through businesses like wildfire. Suddenly every company needed Scrum Masters, Agile coaches, and transformation teams. An entire industry appeared almost overnight. Two-day certification courses churned out Certified Scrum Masters who'd never written a line of code, and they were sent into engineering teams to tell experienced developers how to work.
Some of them were brilliant. Most hit a wall.
I was part of that wave. I read everything there was about being an Agile Product Owner. I drank the Kool-Aid. And then I walked into an enterprise insurance business and hit the reality of a Project Management Office, Agile release trains, and a culture that didn't bend just because I had a new framework and good intentions.
The Dunning-Kruger effect is real. You can read all the books, attend all the workshops, and still not understand why your first sprint review falls flat. Knowledge and execution are not the same thing.
AI adoption is heading down the exact same path. The tools are maturing, the pressure is mounting, and companies are responding the way they always do: nominate someone to figure it out.
Deloitte's 2026 State of AI in the Enterprise report, surveying over 3,200 leaders across 24 countries, found that 37% of organisations are still at surface-level AI use with minimal process changes. The intent is there. The depth? Not so much.
Here's what you're actually signing up for.
The Case For an Internal Champion
Let's be fair. There are genuine advantages to having someone inside the business lead this.
They know the people. They know the politics. They know which processes are held together with duct tape and which ones actually work. That institutional knowledge matters enormously when you're trying to figure out where AI can make the biggest impact.
They can get quick wins on the board. An agent that summarises meetings. A skill that turns messy data into a clean report. A simple automation that saves someone an hour a day. These wins build momentum and buy goodwill.
And they're there every day. They can observe friction, spot opportunities, and iterate without waiting for a consultant's next visit.
That's the best case. Now here's what it actually takes.
What the Role Actually Demands
For an AI Champion to succeed, they need to be a very specific kind of person. Not just enthusiastic. Not just "good with tech." That same Deloitte report identified "insufficient worker skills" as the number one barrier to AI integration. IDC projects that over 90% of enterprises will face critical AI skills shortages by 2026, with the global gap putting an estimated $5.5 trillion at risk.
So what does this person actually need?
Technical depth. They need to understand how good software is developed, how systems integrate, how permissions and security work. The moment you're embedding AI into business processes, you're touching APIs, data pipelines, and access controls. An enthusiast with ChatGPT open in a tab isn't going to cut it.
Systems thinking. Not just first-order "this workflow is faster now" thinking, but second-order "what happens downstream when this workflow is faster" thinking. If you can only see local optimisations, you'll miss the bigger picture entirely — or worse, break something that was working.
A tinkerer's mindset. These tools reward people who play with them daily. Who sit with an LLM long enough to understand why this prompt works but that one doesn't. Why this skill produces inconsistent output. Why you need harnesses and guardrails around a model to get reliable results. If this person gets stuck and needs to ask for help constantly, it's not a dealbreaker — but it goes against the entire mantra of these tools. The whole point is that you can use them to figure it out.
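To make "harnesses and guardrails" concrete: a minimal sketch of what a harness around a model call might look like. This is illustrative only; `call_model` is a hypothetical stand-in for whatever LLM API you use, and the JSON contract is invented for the example. The point is the pattern: validate the output against a contract and retry, rather than trusting a single raw response.

```python
import json

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # A real model may return malformed or off-contract text.
    return '{"summary": "Weekly sync notes", "action_items": ["Send report"]}'

def summarise_meeting(transcript: str, max_retries: int = 3) -> dict:
    """Harness: only accept model output that matches the expected contract."""
    prompt = (
        "Summarise this meeting as JSON with keys "
        f"'summary' (string) and 'action_items' (list):\n{transcript}"
    )
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: ask again rather than pass garbage downstream
        if isinstance(data.get("summary"), str) and isinstance(data.get("action_items"), list):
            return data  # output matches the contract; safe to hand off
    raise ValueError("Model never produced output matching the contract")
```

That validate-and-retry loop is the difference between a demo and something a downstream process can actually depend on.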
Stakeholder credibility. This person needs clout. They need to be able to hold a room with senior stakeholders, push back when needed, and communicate change without alienating the people who are going to be affected by it. Curiosity matters here more than certainty. The moment they walk in thinking they have all the answers, they lose the ability to reframe and adapt.
Time. Not "Friday afternoons when things are quiet" time. Real, dedicated time to learn, experiment, fail, iterate, and build. If this role gets bolted onto everything else they already do, it's dead on arrival.
That's a unicorn. And most companies aren't hiring a unicorn. They're tapping someone on the shoulder.
The Pitfalls We've Seen Before
A 2025 MIT study found that 95% of generative AI pilots stall with little to no measurable impact on the bottom line. The pattern repeats. Early enthusiasm, a few demos, and then nothing scales. The reasons are worth paying attention to.
Mandates Without Agency
Here's what I'm hearing from friends in large tech companies here in Amsterdam: shareholders and C-level are asking "where's the AI innovation?" That urgency trickles down. Someone gets the title. They might get a Slack channel and a pat on the back. But no budget. No authority to change workflows. No air cover when someone pushes back.
If there's no real support, no achievable goals, no sandbox to experiment in, and everyone's already rolling their eyes, this person has been set up to fail. Leadership needs to back this with more than words. Constant communication about what this is trying to achieve. Clear constraints that give the champion room to operate without going off into the wilderness.
Without that, you've got a title and no agency. We saw it with Agile. We'll see it again with AI.
Local Optimisation Over System Design
This is the one that bites hardest.
Your champion builds a brilliant lead generation enrichment process. More leads flowing in than ever before. Everyone's impressed. But the team downstream that receives those leads can't handle the volume. They're overloaded. The data doesn't integrate cleanly into the CRM. What looked like a win creates a bottleneck somewhere else.
Sound familiar? It should. When companies adopted Scrum and continuous delivery, engineering teams started shipping weekly. But in B2B, the sales teams couldn't handle weekly releases. Customer support wasn't briefed. Customers were confused. The system wasn't ready for the speed of one optimised part.
The same thing will happen with AI and automation. One beautifully optimised workflow that doesn't account for the system around it isn't innovation.
Territorial Resistance
Every business has experts with deep tribal knowledge. People who don't have to think about how things work; they just know. They've built their processes over years, and those processes are effective.
When someone walks in with an AI mandate, those experts can feel threatened, and that's a reasonable response.
The champion needs to understand that their job isn't to replace these people or dismiss what they've built. It's augmentation, not replacement. If the champion can't navigate that human dynamic and lean into the expertise that already exists and build on it, they'll face resistance that no tool can solve.
Some of that resistance will come from people who wanted the AI Champion role themselves. Some will come from people defending their processes because those processes are genuinely good. And some will come from people who are simply scared about what this means for their job.
All of it is valid. All of it needs handling with care.
The Product Owner Problem, Again
If you've worked in tech, you've seen this exact pattern play out with Product Owners.
Business people were given a title that put them at the intersection of business and engineering. They were expected to bridge that gap. But many of them had no background in product management, no experience working with technical teams, and no framework for making the trade-offs that role demands.
The first release fell over. Customers complained. Customer support wasn't briefed on day-zero issues. All things that someone experienced would have anticipated. The known unknowns of the role were missed, not because the person was bad at their job, but because they hadn't done it before.
The AI Champion role is the Product Owner problem all over again. A well-intentioned person, often without the right background, dropped into a role that requires technical fluency, systems thinking, stakeholder management, and change leadership. All at once. On top of their existing responsibilities.
Getting It In Writing vs. Getting It Done
You can always ask an LLM to produce a rollout plan. It'll give you a beautiful one. Structured, comprehensive, with all the right headings.
But getting it in writing and doing it are two different things.
That's why someone fresh out of university with a degree can still hit a wall if they've never done an internship. That's why consultancies exist. That's why templates and frameworks still have value. The gap between knowing what to do and having the nuance to actually execute it is where experience lives.
An LLM can produce the knowledge and the action plans. It can even get some of the nuance from your context. But navigating the politics of a real organisation, knowing which battles to pick, understanding why a technically correct approach will fail culturally can only come from time spent doing it.
That same MIT study found that companies partnering with experienced external practitioners succeed with AI implementation roughly 67% of the time. Internal-only builds? About 33%. That's not a minor difference. That's double the success rate.
Companies that succeeded with Agile didn't just hand someone a book and say "go." They brought in experienced practitioners to support internal champions, and hired people that had worked that way before. The external partner accelerated the learning, filled the gaps the champion couldn't see yet, and helped avoid the mistakes that come from learning everything the hard way.
The companies that skipped that step? A lot of them are still doing waterfall with standups.
Bear This In Mind
None of this is to say don't nominate an AI Champion. Do it. Find that curious, technically capable person who wants to drive this forward. Give them real support, real time, and real authority.
But be honest about what the role actually requires. And be honest about the cost of the learning curve. Every week your champion spends figuring out what an experienced practitioner already knows is real time and real opportunity cost.
2026 is shaping up to be the year when external signals make it clear that you need to be doing something with the AI we have today, and the pressure is real. The gap between teams that get systems in place and those that don't will only widen, and at an accelerating pace.
How you set yourself up for success matters. Choose wisely. And whatever you do, don't set your champion up to fail.
This article was co-written with AI. The ideas, experiences, and opinions are mine. The structure and drafting were developed through an interview-style conversation with Claude, then reviewed and edited by hand.

