On-Demand Webinar

Nobody has AI figured out. Here's what it actually looks like inside two marketing teams that are trying.

How are you going to stand out in 2026? 

Buyers are in control of the buying process, and over 70% of it happens online. The vendors that make their short list are the ones whose content provides value and establishes trust.

How are you accounting for this in your 2026 strategy?

You need to stand out and make sure your buyers notice you. Let us help you build that into your strategy for next year.

Still chasing 10% efficiency gains?

2023-2025 were the years of experimentation with AI. 2026 is the year to make it transformative. If you're still trying to figure out the strategic advantage AI will bring to your team, we're here to help!

If you believed everything you read on LinkedIn, you'd think you missed the boat.

Every other post is a CMO announcing their fully AI-integrated marketing function, their agentic workflows humming along, their team operating at twice the output with half the headcount. It reads clean. It reads confident. And according to the two marketing leaders Inverta sat down with for this conversation, it's mostly fiction.

"Whoever says they're an AI expert is lying," said Lisa Harrup Mieuli, CMO at Gigamon. "We're all learning."

That honesty is exactly what made this conversation worth having. Lisa and Colleen Goldblatt, Senior Director of Global Marketing Programs at Qlik, spent an hour with Inverta's Kathy Macchi talking about where their teams actually are with AI — the experiments that worked, the ones that didn't, the governance structures they've built, and the counterintuitive conclusion both of them have reached about where AI is pushing marketing next.

Here's what they said.

The forcing function: when AI becomes a company priority, not just a team experiment

Both Lisa and Colleen had a version of the same story. Their teams started experimenting with AI individually — content generation here, a workflow shortcut there — until it became clear that scattered experimentation wasn't the same as organizational progress.

At Gigamon, Lisa's team created what she called an "AI tiger team" inside the marketing department. Rather than asking everyone to figure it out on their own, they defined specific use cases, brought in an external AI expert for organization-wide training, and moved through use cases one at a time. Content first. Then personalization for the BDR team. Then campaign execution speed.

"If you try and do everything, then you can't be successful at anything," Lisa said. "One step at a time."

The other key move: making AI optimization an explicit company objective — one of four "big rocks" for the year. That framing did two things. It gave every team member permission to spend time on AI work without feeling like they were dropping other priorities. And it created a visible mandate that the board could track.

At Qlik, the path to governance looked different. Colleen described watching AI get adopted simultaneously across every function — marketing, sales, customer success, finance — with each team making independent choices about tools, data access, and use cases. The cross-functional AI council wasn't just about marketing. It was about preventing an organization-wide governance gap before it became a security or cost problem.

"All of a sudden, I have access to the CS team's data, or finance has access to everything," Colleen said. "How do you start to govern that? How do you open that up in ways that are fit for purpose?"

The council addressed two things at once: the structural question of who has access to what, and the skill set question of whether teams actually have the discernment to evaluate what AI is giving them. "If we couldn't do that," Colleen said, "then we were scaling risk in our business, not scaling our business."

What the board actually means when they say "do more with AI"

Both Lisa and Colleen have had versions of the same board conversation, and both were candid about the gap between what boards ask for and what they understand.

Lisa's read: boards see cost, efficiency, and headcount reduction. They want to know if AI can do the work of people who would otherwise need to be hired. The job of the CMO is to reframe that conversation — to show AI as an efficiency multiplier that lets existing teams do more strategic work, rather than a replacement for the team itself.

Each board meeting at Gigamon now includes an update on where AI is being implemented, where it's working, and where it hasn't. That transparency, Lisa said, is what keeps the expectation conversation honest.

Colleen's read: the gap is less about intention and more about understanding. "The models are lovely," she said. "The maturity curves are nice and clean and tight and tidy. But when you get down into what it actually takes to make this work, understanding your systems and how healthy your systems are is so incredibly important."

By systems, she means everything from data cleanliness to process definition. Clean brand guidelines. Structured product documentation. Workflows that are actually documented rather than living in someone's head or getting resolved by committee in a meeting. The assumption that these things are ready is usually wrong. The work of getting them ready is significant, and it's work that doesn't show up neatly on a board slide.

Lisa added one more dimension that boards tend to underestimate: brand protection. The risk isn't just that AI produces inaccurate content. It's that AI produces content that reads like AI — content your customers can clock instantly. "You need to protect your tone of voice," she said. "Your customers can believe and not instantly say, oh, AI produced that."

Colleen's counter-observation: if the inputs are clean enough, AI can actually improve content consistency. The challenge is that getting inputs to that level of cleanliness is exactly the hard work most teams haven't done yet.

What's actually working

The most useful part of the conversation was the specifics. Not frameworks — actual examples of what they've tried and what happened.

Colleen's event audience acquisition strategy

Colleen's team used ChatGPT as a thought partner to build a six-month event audience acquisition strategy — from scratch, in roughly a day.

The approach: upload everything relevant first. Event details, messaging frameworks, persona targets, registration goals, last year's data, previous email performance. Give the model full context before asking it to do anything.

Then build the strategy iteratively. What does each audience segment need to see before they'll register? At what point in the six-month runway does each piece of content land? What does the weekly messaging calendar look like by channel? What's the message permutation for each audience in each week?

Once the strategy was built, she asked ChatGPT to format the tracking structure in Smartsheet — and every channel involved in the event could start planning against a shared framework, well ahead of when they'd normally be scrambling.

"Doing that work took me about a day," Colleen said. "Building that strategy in previous years would have been ad hoc over six months."

Lisa's content workflow

Gigamon's content team has built agents that move content from a source document — a white paper, a messaging doc — to multiple finished formats without each conversion requiring a separate human briefing. A blog post, a social variant, a campaign email can all flow from a single approved source.

The human stays in the loop at the checkpoints that matter: reviewing the source messaging doc before it goes into the workflow, and reviewing the outputs before they go live. What's changed is the speed between those checkpoints.

"The speed to content and the speed to campaign execution has just been extremely faster," Lisa said.

The BDR team is seeing similar gains. Personalization at scale — understanding an account's business context well enough to write a genuinely relevant outreach — used to be a luxury reserved for one-to-one ABM programs. Now it's closer to a standard workflow capability.

Where it hasn't worked yet

Kathy asked both of them to be honest about where they've tried something and it didn't land. The answers were instructive.

Colleen pointed to AI-generated video. Not a failure, exactly — but a lesson in how much human review the medium still requires. Her creative colleague spent a day combing through finished video, checking every mirror reflection and window in the background for unexpected hallucinations. "I wouldn't say it didn't work," Colleen said. "I just don't leave it alone yet."

She raised a related concern that's easy to miss in the efficiency narrative: as AI speeds up production, it creates new bottlenecks downstream. More content going to legal for review. Legal teams not staffed for the volume. The humans at the checkpoints getting more overwhelmed, not less. "How are you protecting those humans?" she asked. "How are you reducing workload there?"

There was also an honest moment about cost. "When I see how many tokens it takes," Kathy said, "it's like, oh, maybe it's not free." It's not. And the economics of AI-assisted workflows, especially at scale, are still being worked out.

The counterintuitive shift: AI is making in-person more valuable, not less

Both Lisa and Colleen arrived at the same observation from different angles, and it's one of the more interesting things to come out of the conversation.

As AI-generated content floods digital channels, people are seeking out human connection more, not less. Colleen spent four years in a field leadership role at Qlik before her current role, and she watched it happen in real time.

"The more AI content that they were seeing in their feeds, I found that people were seeking out trusted, one-on-one human connection and relationship," she said. "And that human connection was really happening in in-person events."

Not large conferences necessarily — the format that's working is smaller, more intentional. Community dinners. Roundtables. Recurring gatherings where people build trust over time. Colleen described it as a bell curve: investment growing at both ends — more AI-assisted digital engagement and more high-touch in-person — with the middle (hybrid, generic) becoming less effective.

Lisa had to make this case to her board directly. The attribution model was giving digital more credit, and there was pressure to cut in-person investment accordingly. Her argument: buyers spend 70% of the purchasing cycle doing their own research. By the time they want to talk to someone, they want to talk to a person, and the memory of a real experience — a roundtable, a dinner, a conversation with a customer peer — influences that decision in ways the attribution model doesn't capture.

"It can't be one or the other," she said. "It's how do we pull the levers in different ways but not take our foot off one lever."

What they'd tell you to do in the next ninety days

Kathy asked both of them to leave the audience with one thing. Here's what they said.

Lisa: "Committee is really important to help get the whole team involved. And freedom to explore — not everything has to work. Try pilots. If they don't work, great. Move on. If they do work, rinse and repeat."

Colleen: "Have a plan. Experimentation is incredibly important. But having a plan to actually turn that experimentation into something broader is so important. Find a couple things that work. Make those bigger. How do you lean in, expand, grow from there? Move from experimentation to something that gets operationalized."

Both answers point at the same underlying truth: the teams making real progress with AI aren't the ones moving fastest. They're the ones moving with intention — one use case at a time, with humans still in the loop at the places that matter, and a plan for how experiments become operating reality.

The messy middle isn't a failure state. It's where the actual work happens.

Read Inverta's AI manifesto for the bigger decisions and approaches marketing leaders need to make as they move through this transition. Download it at inverta.com.

About the author
With 25 years in sales, marketing, and IT, this ITSMA-certified ABM practitioner co-founded Inverta to consult with top companies on marketing transformation.

Artificial intelligence

Don’t feel behind; we’re all in this together. We’re running eight types of AI marketing pilots with dozens of clients to help them shortcut the hype and prove real value.
Learn how we help
