I keep hearing CMOs talk about AI as if it’s a tooling problem. What platform to adopt. Which model to use. How fast their teams can ship with it.
That’s not what’s breaking.
What’s breaking is that marketing organizations were already over-optimized for output, and AI just made that impossible to ignore. When everyone can produce competent work at scale, the old signals of progress – volume, velocity, channel activity – stop meaning much. The dashboards still light up. The work still ships. But differentiation quietly disappears.
That’s the backdrop for this conversation with Johann Wrede. Not “how AI is changing marketing,” but how it’s forcing marketing leaders to confront what they’re actually accountable for.
Johann doesn’t frame the CMO as a creative lead or a growth hacker. He frames it as a business role. That sounds obvious until you sit in a few planning meetings and watch how often marketing decisions are justified by craft, convention, or habit rather than outcomes. His career path – engineering, sales, marketing, then the C-suite – shows up in how little patience he has for silos. He’s seen too many teams optimize their piece of the machine while the business result goes sideways.
One thing he said stuck with me: most of his time is no longer spent “doing marketing.” It’s spent aligning budget, talent, and priorities so marketing effort actually connects to company objectives. That shouldn’t be surprising. And yet it is, because many CMOs are still rewarded – explicitly or implicitly – for shipping things, not for changing how decisions get made.
AI raises the stakes on that misalignment.
Execution is no longer scarce. Judgment is. And judgment doesn’t scale by default.
Large language models are very good at producing work that looks right. They are also very good at converging on the middle. If you let them, they will flatten your brand voice, sand down uncomfortable edges, and give you something that passes review without offending anyone. Which is exactly how most marketing dies.
This is where the familiar "AI as intern" analogy actually holds, with one caveat. AI doesn't stay an intern for long. It becomes a fast, competent contributor that will happily run in whatever direction you point it. The quality of the output is a direct reflection of the quality of the constraints. If the brief is vague, the work will be generic. If the standards are unclear, the results will drift.
The CMOs who are doing well here aren’t prompting for more content. They’re prompting for critique. They’re asking models to argue against the idea, identify where the message collapses into cliché, or explain how a customer could misread what’s being said. They’re using AI to surface weak thinking, not to cover it up.
And they are very clear about what AI is not allowed to do.
It is not allowed to originate a point of view.
It is not allowed to decide what matters.
It is not allowed to speak on behalf of the brand without supervision.
That last one is where things get messy in practice.
As more companies rush AI-powered experiences into market, you can see the cracks. Assistants that are fast but tone-deaf. On-brand language that feels oddly hollow. Interactions that technically work but leave customers uneasy. This is why, somewhat counterintuitively, in-person experiences are having a moment again. Not because digital stopped working, but because physical moments are harder to fake.
Real events create friction. They carry social cost. They reveal whether what you say matches how you behave. When everything online starts to sound the same, those moments stand out.
This is also why UserTesting continues to see demand for direct human insight. Synthetic users can help you explore scenarios. They cannot tell you how something lands. That still requires people. Not personas. Not averages. Actual humans reacting in real time.
There’s a temptation right now to blur the line between bots and people, to make AI feel more human than it is. I think that’s a mistake. Most users don’t want deception; they want clarity. They want to know when they’re interacting with an assistant, what it can do, and how to reach a person when the stakes are high. Trust doesn’t come from pretending. It comes from setting expectations and meeting them.
All of this points to a bigger issue that AI is forcing into the open: most marketing organizations are still designed around output, not outcomes.
If your org chart, roles, and incentives look the same next year, none of this will matter. You can adopt every tool on the market and still get the same results, just faster. The real work is redesigning how decisions get made – who sets standards, who owns judgment, and how quality is protected when speed is no longer a differentiator.
Curiosity matters here, but not in the motivational-poster sense. It matters as an operating habit. Hiring people who ask better questions. Rewarding teams for finding problems early, not for polishing work late. Teaching marketers to pressure-test their own thinking before the market does it for them.
AI can help you move faster. It can help you see patterns. It can help you explore options you wouldn’t have had time to consider before. What it can’t do is decide what’s worth doing in the first place.
That part of the job never went away.
It just got harder to fake.