Futureproofing your Brand for AI Agents

By Adam Kleinberg
Every marketing leader we talk to right now is somewhere on the same journey — past the hype, over the demos, and into the harder questions. How can AI Agents actually deliver business outcomes? And for many marketers — where do we even begin?
That's the conversation we set out to have at our latest Futureproof Project session. Twenty-six leaders joined us, ranging from CMOs at large enterprises to consultants embedded in financial services, tech, and creative orgs. Some have been building agents for two years. Some are just getting their feet wet (building “baby agents!”). All of them are trying to figure out how to move with purpose in a space that's changing faster than anyone can keep up with.
The meeting began with a conversation with Amit Shah, founder of InstaLILY and former president of 1-800-Flowers, where he helped grow the company from $500M to $2B in revenue. Before AI was a boardroom word, Amit was operating at the intersection of code and culture, which is the lens he brought to this conversation.
Here's what stood out from our session on AI Agents.
The Arc: Code became culture.
Amit traces 70 years of marketing through one lens: the evolving relationship between code and culture.
- Broadcast gave us coherence.
- The internet made marketing participatory.
- Algorithms became editors of meaning.
- Generative AI made creativity co-authored.
Now we're in the agentic era. Code is teaching us autonomy by being autonomous. As Amit put it, "Marketing no longer moves with human mediation. The code is moving on its own."
You're not just managing new technology. You're navigating a new kind of workforce.
The Context Moat.
Every conversation I have about agents orbits one word: context.
The tools are table stakes. You cannot build a moat around code. Software can now be replicated easily with AI. The differentiator is what you bring to them.
As Amit put it:
"Context is not equal to storage. There is a huge desire to treat it like Google Drive — load everything up and the system gets smarter. Actually, it causes what we call context rot."
Amit Shah, InstaLILY
Three practical implications:
- Layer your context. Think of it like onboarding a new employee — role-based, hierarchy-based access. You don't forward your CEO conversations to the whole team. The same principle applies to your agents.
- Separate your context layer from your inference layer. Don't let ChatGPT or Claude hold your context. Keep it separate, let them connect to it. Own your intelligence rather than rent it.
- Capture feedback systematically. When someone gives a thumbs down on an AI output, where does it go? In most organizations, nowhere. Amit's framing: "Six people in your department have said we don't do brand this way. That should become an infused rule."
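For the technically inclined, the three implications above can be sketched in a few dozen lines of plain Python. This is a hypothetical illustration, not InstaLILY's implementation — every name in it is invented — but it shows the shape of the idea: context lives in a store you own, agents see only their layer, and repeated feedback gets promoted into a standing rule.

```python
# A minimal sketch of the three practical implications above.
# All class and function names here are hypothetical illustrations.

from collections import Counter

class ContextStore:
    """Own your context layer: role-scoped snippets, kept separate
    from whichever inference provider (ChatGPT, Claude) you call."""

    def __init__(self):
        self._docs = {}            # role -> list of context snippets
        self._rules = []           # "infused rules" learned from feedback
        self._feedback = Counter() # tally of recurring thumbs-down notes

    def add(self, role, snippet):
        self._docs.setdefault(role, []).append(snippet)

    def for_role(self, role):
        # Layered access: a role sees its own layer plus shared rules --
        # like a new hire getting role-based access, not the whole drive.
        return self._docs.get(role, []) + self._rules

    def record_feedback(self, note, threshold=6):
        # Capture feedback systematically; when the same note recurs
        # enough times ("six people have said we don't do brand this
        # way"), promote it to an infused rule every role inherits.
        self._feedback[note] += 1
        if self._feedback[note] == threshold:
            self._rules.append(note)

def build_prompt(store, role, task):
    # The inference layer receives context at call time; it never owns it.
    context = "\n".join(store.for_role(role))
    return f"{context}\n\nTask: {task}"

store = ContextStore()
store.add("copywriter", "Brand voice: plainspoken, no jargon.")
for _ in range(6):
    store.record_feedback("We don't do brand this way: no exclamation points.")
print(build_prompt(store, "copywriter", "Draft a product headline."))
```

The design choice worth noticing: the prompt is assembled at call time from a store you control, so swapping inference providers never means surrendering your accumulated context.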
Explicit vs. implicit signals.
The difference between a tool you use and a platform that gets smarter is how it learns. For example:
Explicit: InstaLILY monitors recorded calls — when a pattern appears repeatedly (say, a company intro running over 90 seconds), the system flags it as a suggested SOP. It becomes a rule: "Keep it to 90 seconds. We've seen this."
Implicit: If enough people on the team use an initial capital "Insta" and always capitalize "LILY," the system learns the brand standard “InstaLILY” and applies it automatically.
Something often overlooked: what not to include matters as much as what to include. Context isn't just additive. The subtractive form — what's irrelevant to this person, this role, this task — also compounds.
Three criteria for evaluating an agentic initiative.
Sumantro Das, Amit's co-founder and COO at InstaLILY AI, offered these three criteria:
- 1. Impact — Can this make us more money or drive north-of-90% efficiency gains? That's the bar. Incremental isn't aiming high enough.
- 2. Operability — Three parts:
> Observability: Can you see what the agent is doing and why?
> Reinforcement: Where is your team's feedback going, and what are you learning from it?
> Development: How are you upskilling your existing team alongside the agents?
- 3. Asset Development — There's a meaningful difference between building your own AI assets and renting someone else's. The shift isn't happening in the front-end software — it's in the memory layers and proprietary models companies are training on their own data.
Fail faster.
Someone asked whether speed or agility matters more right now.
Amit's answer: Speed, every time.
"The marginal cost of rebuilding yourself is almost very close to zero. So you should just fail faster, because then you can start rebuilding quicker."
Amit Shah, InstaLILY
Grant McDougal, CEO of AI marketing platform BlueOcean.ai, told the audience his team rebuilt a competitor's LLM share-of-voice measurement tool from scratch — customer request to enterprise deployment — in a single day.
According to Grant, "There's no competitive advantage in software anymore. The competitive advantage is derived from unique data access."
The software is just the container. The data is the secret sauce.
As a leader, that means creating a culture where failing fast isn't just tolerated — it's expected.
Build fast. Learn fast. Rebuild fast.
The trust factor.
One truth to reckon with is that people are building tools that nobody uses. The reason is trust. Many humans still aren’t confident in the results they get from AI.
Sometimes distrust isn't a problem — sometimes it's even healthy. One call participant pointed out, "You're accountable for the quality of the work. I wouldn't trust a junior writer 100% either."
The problem is when distrust becomes paralysis. It takes much less time to edit than to create. If an agent gets you 80% of the way there, that's still a massive time save. The human-in-the-loop isn't a workaround. It's the architecture.
For example, Traction has built an AI research workflow that flips traditional research on its head using synthetic audiences. We tested it on a real client situation, then validated the outcome by uploading 30 transcripts of real stakeholder and customer interviews and asking Gemini how close the synthetic research had gotten. It said 70-90%. We then asked the strategy director who actually led the interviews. His response? "I'd say about 80%." That let us plug the holes in the process by asking: what gets us that last mile of confidence?
So the leadership challenge isn't just building the tools — it's building the habits around them. Adoption doesn't happen at launch. It happens when people see their peers actually using it, and getting better results because of it.
The "Baby Agent" reality.
Most organizations are still in baby agent territory, and that's a legitimate place to start.
Practical advice: do a repeatable-task audit. Every role, every recurring task, no matter how small. Even automating a 20-minute monthly report starts to compound — and it gets your team believing before you ask them to tackle the bigger stuff.
Start where you can move fast. Show impact. Build confidence. THEN expand.
Your next customer is an agent.
If you're a marketer, your next customer is an AI agent. Your brand needs to be machine readable.
Agents will be making purchasing recommendations, evaluating vendors, surfacing choices to humans. If your brand isn't legible to AI systems — positioning unclear, differentiation unstructured — you're invisible in an agent-mediated buying process.
Brand may be the most defensible moat in an agentic world. The things that make brands human-resonant matter more as agents become intermediaries.
Key Takeaways
- You're building an intelligence layer, not a software layer. Context, memory, and feedback loops are your advantage. Start building them now.
- Fail faster on purpose. The cost of rebuilding is near zero. The cost of waiting compounds every month.
- The baby agent is a real starting point. Automate the 20-minute task. Let the team feel the unlock first.
- Trust is built, not assumed. Surface the verification process. Human checkpoints aren't overhead — they're the design.
- Make your brand machine-readable. Positioning clarity is becoming a structural necessity, not just a creative one.
Requests from the community.
The Futureproof Project is Traction's community for marketing leaders navigating the AI transformation.
Reach out if:
- You have a topic you would like to explore in one of our upcoming virtual sessions.
- You would like to join or have someone you think would benefit from the community.
- Your organization needs support with a specific AI challenge you would like to further explore with our team.
Check out our 2026 AI Marketing Guide.

Adam Kleinberg has been CEO and a founding partner of Traction since 2001. He has written over 100 articles in publications like AdAge, Adweek, Fast Company, Forbes, Mashable and Digiday and spoken at dozens of industry conferences. He's led Traction to win Agency of the Year awards from AdAge, ANA B2 Awards, CampaignUS, and in 2025, he was recognized as one of the Campaign 40 Over 40 game-changers in marketing and advertising.
