AI is transforming how we work—from how we generate content to how we target, personalize and optimize at scale. The tools are getting smarter, and the pace is only picking up.
by Tammy H. Nam
But with all the focus on speed and efficiency, it’s easy to overlook a critical piece: how we use AI reflects what we value. And even though inclusion isn’t dominating headlines the way it did a year ago, it still matters. Not just because it’s the right thing to do but because it’s also good for business.
Inclusive marketing reaches more people, builds trust and prevents the kind of blind spots that erode performance and credibility. When we don’t catch these gaps, we risk more than just tone-deaf creative. We miss out on revenue, alienate potential customers and weaken long-term brand equity. As we embed AI deeper into our workflows, keeping that lens in place will help us build smarter systems—and stronger brands.
Why This Still Matters (Even If You’re Not Training Your Own Models)
Most companies today aren’t training their own AI models. We’re adapting off-the-shelf systems—like GPT or Claude—and layering them into existing tools and workflows. That means we’re not responsible for raw training data. But we are responsible for how we apply these systems and how their outputs show up in front of our customers.
Bias can still creep in—through prompts, inputs, filters and how content gets reviewed (or doesn’t). If we’re not intentional about it, we risk reinforcing stereotypes, narrowing our reach or excluding customers in subtle ways that can quietly hurt our brand over time.
Four Areas Marketers Should Keep an Eye On
- Smarter, Fairer Segmentation
AI helps us spot patterns, but it can also reinforce old ones. Regularly audit your audience segments to make sure you’re not unintentionally under-serving (or overlooking) key groups.
For example, if an algorithm consistently over-prioritizes high-income zip codes, you might miss an entire group of value-driven customers who would’ve responded to a different message. Research from Meta shows that even subtle changes in segmentation criteria can lead to major shifts in who sees what and who doesn’t.
- Content That Doesn’t Miss the Mark
Generative AI is great for scale, but it’s not great at nuance. Word choices and tone can subtly reflect cultural bias or dated stereotypes. Always review outputs, especially when targeting different markets or communities. Human oversight still matters.
- Targeting and Pricing That Don’t Alienate
AI-driven targeting is powerful, but without guardrails it can cross ethical lines. The same goes for dynamic pricing. Keep your systems transparent, fair and accessible. No one wants to find out they’re being charged more because of who they are.
- Representation in Creative
Image generation, copy suggestions, and asset selection tools are getting better, but they still rely on defaults that can narrow what (and who) gets shown. Guide your creative teams and tools to reflect the world we live in. Representation is relevance.
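The segmentation audit described above can start very simply. This minimal sketch (all bracket names and numbers are hypothetical, not from any real campaign) compares the share of spend an algorithm allocates to each income bracket against that bracket's share of the addressable audience, surfacing over- and under-served groups:

```python
# Minimal sketch: compare where an algorithm actually allocates spend
# against each group's share of the addressable audience.
# All bracket names and numbers are hypothetical.

audience_share = {   # each group's share of addressable customers
    "high_income": 0.20,
    "middle_income": 0.50,
    "lower_income": 0.30,
}

spend_share = {      # share of budget the algorithm allocated to each group
    "high_income": 0.55,
    "middle_income": 0.35,
    "lower_income": 0.10,
}

def audit_allocation(audience, spend, tolerance=0.5):
    """Return groups whose spend-to-audience ratio deviates from 1.0
    by more than `tolerance`, mapped to that ratio."""
    report = {}
    for group, aud in audience.items():
        ratio = spend[group] / aud
        if abs(ratio - 1.0) > tolerance:
            report[group] = round(ratio, 2)
    return report

print(audit_allocation(audience_share, spend_share))
```

In this illustration, high-income customers receive almost three times their proportional share of spend while lower-income customers receive a third of theirs, which is exactly the kind of pattern worth a closer look before it hardens into a blind spot.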
What Others Are Doing (and What We Can Learn)
This conversation isn’t just theoretical. More companies are taking action. Haleon, for example, built an AI-powered Health Inclusivity Screener to review digital ad content for readability and accessibility. It’s a reminder that inclusive marketing isn’t just a message; it’s something we can design for.
Research is catching up, too. A recent paper in Big Data and Cognitive Computing outlines a framework to help identify and reduce bias in digital marketing systems—something marketers should have in their toolkit as AI becomes more embedded in the creative process.
And according to the 2024 State of Marketing AI Report, over a third of companies now have formal AI ethics guidelines in place. That’s encouraging, but also a signal that the bar is rising.
A Quick Reality Check: How to Spot Bias Before It Spreads
You don’t need an in-house AI research team to do this well. Here’s what helps:
- Audit your outputs regularly. Don’t assume AI-generated content is neutral just because it’s automated. Review what your systems are producing across different campaigns, channels and use cases. Look for recurring themes, language patterns or visual defaults that might unintentionally narrow your message.
- Monitor performance across groups. Go beyond top-line metrics. If certain audiences are consistently underperforming—or being served less content—dig into the why. Is the messaging off? Is the targeting too narrow? Disparities in results are often a sign of deeper system biases.
- Test intentionally. Small tweaks in your prompt wording, audience rules or creative inputs can produce surprisingly different results. Set up A/B tests not just for performance but also for balance. What messages resonate, and with whom?
- Include diverse perspectives. Who’s reviewing the outputs? If it’s the same group of people every time, you’ll miss things. Make sure your QA and creative feedback loops include different voices across functions, backgrounds and lived experiences.
- Use bias detection tools. Many platforms now offer built-in fairness checks or integrations that can flag issues early. Use them not just as a final step but as part of your regular workflow, especially when automating at scale.
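As a concrete illustration of the "monitor performance across groups" step, here is a minimal sketch (segment names and metrics are hypothetical) that flags audience segments whose click-through rate or impression share falls well below the campaign average, the kind of disparity worth digging into:

```python
# Minimal sketch: flag audience segments whose click-through rate (CTR)
# or impression share lags far behind the campaign average.
# Segment names and numbers are hypothetical illustrations.

campaign_results = {
    # segment: (impressions, clicks)
    "urban_18_34":     (120_000, 3_600),
    "suburban_35_54":  (95_000, 2_850),
    "rural_all_ages":  (15_000, 150),    # served far fewer impressions
    "high_income_zip": (140_000, 4_900),
}

def flag_underserved(results, ctr_gap=0.5, share_gap=0.5):
    """Return segments whose CTR is below `ctr_gap` of the campaign
    average, or whose impression share is below `share_gap` of an
    even split across segments."""
    total_impr = sum(impr for impr, _ in results.values())
    total_clicks = sum(clicks for _, clicks in results.values())
    avg_ctr = total_clicks / total_impr
    even_share = 1 / len(results)

    flagged = []
    for segment, (impr, clicks) in results.items():
        ctr = clicks / impr
        share = impr / total_impr
        if ctr < avg_ctr * ctr_gap or share < even_share * share_gap:
            flagged.append(segment)
    return flagged

print(flag_underserved(campaign_results))
```

A check like this won't tell you *why* a segment is underperforming, but it turns "go beyond top-line metrics" into a routine, repeatable step rather than a one-off investigation.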
The Bottom Line
AI is here to stay, and it’s getting more powerful by the day. But power without thoughtfulness—especially when applied systematically—can do more harm than good. As marketers, we have a unique role to play in shaping how these systems are applied.
Done right, responsible AI isn’t a compromise. It’s a competitive advantage. Consumers are paying attention, especially Gen Z and Millennials, who are more likely to support brands that reflect their values. Leading with thoughtful AI use isn’t just a technical decision. It’s a brand statement.

Tammy H. Nam is CEO of Creatopy.