Don't let your AI strategy cost you deals | March AI tips

How are you going to stand out in 2026? 

Buyers are in control of the buying process, and over 70% of it is happening online. They are short-listing the vendors whose content provides value and establishes trust.

How are you accounting for this in your 2026 strategy?

You need to stand out and make sure your buyers notice you. Let us help you build that into your strategy for next year.


The Revenue Marketer


Still chasing 10% efficiency gains?

2023-2025 were the years of experimentation with AI. 2026 is the year to make it transformative. If you're still trying to figure out the strategic advantage AI will bring to your team, we're here to help!

If you’re still basing your entire B2B marketing strategy on clicks, MQLs, and that old-school SEO playbook, you have a blind spot, and it’s costing you more deals than you realize. This month’s issue continues to explore themes touched on last month (adjusting your content for an agentic buyer), but before we get too far out over our skis, let’s step back and address some major operational problems contributing to our inability to be present for our buyers.

Inside this issue, we lay out five immediate steps you can take this week to start building the organization and systems you need to work with AI: training tailored to your team’s needs, smarter AI project briefs that keep you focused on meaningful work, automations that don’t needlessly blow your budget, and two ways to optimize for our new AI-assisted buyers.

Generic AI Training Is Burning Budget Without Moving the Needle

Most organizations responding to the AI adoption mandate are running broad tool demos and lunch-and-learns without first diagnosing where the actual skill gaps sit. Trust Insights’ February research identifies the consistent failure pattern: teams skip the proficiency assessment step, so half the team sits through training they’ve already mastered, while team members who need role-specific workflow guidance get nothing relevant. AI upskilling budgets are growing. Generic programs produce generic adoption rates, which means spending real money to marginally change behavior for a few early adopters rather than shifting how the whole team works. The teams seeing measurable productivity gains mapped their gaps before designing training.

What to do today

Run a 10-minute proficiency survey before your next training investment. Set up a Google Form or Typeform asking three questions: how your team currently uses AI in their work, which time-consuming tasks remain unautomated, and where AI attempts have failed or produced unusable output. Paste all responses into Claude and ask it to categorize gaps by role and surface the top three workflow training priorities. Share that output with your learning vendor or use it to build a targeted internal curriculum. A two-hour workshop built around specific gaps will outperform a full-day generic training on adoption rates.
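Before you paste anything into Claude, it can help to tally the raw responses yourself so you can sanity-check what the model surfaces. A minimal sketch, using entirely hypothetical survey data (the roles, tasks, and failure notes below are invented for illustration):

```python
from collections import Counter

# Hypothetical responses: (role, current AI use, unautomated task, where AI failed)
responses = [
    ("content", "drafting emails", "repurposing webinars", "off-brand tone"),
    ("ops", "none", "list cleaning", "hallucinated field names"),
    ("content", "social posts", "repurposing webinars", "off-brand tone"),
    ("demand gen", "ad copy variants", "campaign reporting", "wrong metrics"),
]

def top_gaps(responses, n=3):
    """Tally the still-unautomated tasks to surface training priorities by volume."""
    counts = Counter(task for _, _, task, _ in responses)
    return [task for task, _ in counts.most_common(n)]

def build_prompt(responses):
    """Assemble the raw responses into one prompt you can paste into Claude."""
    lines = [f"- role={r}; uses={u}; unautomated={t}; failed at={f}"
             for r, u, t, f in responses]
    return ("Categorize these AI proficiency gaps by role and surface the top "
            "three workflow training priorities:\n" + "\n".join(lines))

print(top_gaps(responses))  # "repurposing webinars" leads with two mentions
```

The tally gives you a baseline to compare against Claude's categorization; if the model's top priorities don't overlap with the highest-volume gaps, dig into why before committing budget.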

Source: Trust Insights, INBOX INSIGHTS: Why Generic AI Training Fails (February 11, 2026)

Your AI Project Planning Is Backwards, and That’s Why Execution Stalls

Most marketing teams treat AI projects the way they treat software requests: gather requirements, hand off to someone technical, wait for output. Christopher Penn’s February analysis of dozens of AI projects points to a cleaner model. The ratio that predicts success is roughly 50/50: for every hour spent on execution, spend an equal amount on planning, defining clear objectives, identifying the people involved, mapping the process, selecting the platform, and specifying how you’ll measure performance. Agentic AI systems fail not because the technology is insufficient, but because the instructions are ambiguous. Vague briefs produce vague outputs, and teams blame the tool when the problem is the setup.

What to do today

Take one stalled or underperforming AI project your team has deprioritized. Before touching the tool again, write a single-page brief answering five questions: What is the specific output we need? Who is responsible for each step? What does the process look like from input to output? Which platform will handle each part? How will we know it worked? Paste that brief into Claude and ask it to identify ambiguities or missing information before you build anything. Teams that front-load this planning step consistently get to working outputs faster than teams that iterate by trial and error.
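The five brief questions above lend themselves to a simple completeness check before you involve any AI tool at all. A minimal sketch (the field names and draft brief below are hypothetical):

```python
# The five questions every one-page AI project brief should answer.
BRIEF_FIELDS = {
    "output":   "What is the specific output we need?",
    "owners":   "Who is responsible for each step?",
    "process":  "What does the process look like from input to output?",
    "platform": "Which platform will handle each part?",
    "measure":  "How will we know it worked?",
}

def missing_fields(brief):
    """Return the still-unanswered questions so the brief gets fixed before any build."""
    return [q for key, q in BRIEF_FIELDS.items() if not brief.get(key, "").strip()]

# A typical stalled project: output and tooling chosen, everything else vague.
draft = {"output": "Weekly competitor digest email", "platform": "Claude"}
print(missing_fields(draft))  # three questions still unanswered
```

If `missing_fields` returns anything, that is your planning half of the 50/50 ratio still owed; answer those questions first, then ask Claude to probe the brief for remaining ambiguities.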

Source: Almost Timely News: How I Think About Building with AI (2026-02-15)

Open Weights AI Models Cut Tooling Costs by 95% for Routine Marketing Work

Your team is likely paying premium rates for closed AI models when open weights alternatives handle most marketing tasks at a fraction of the cost. Christopher Penn’s February benchmarking puts the numbers plainly: models like GLM-5 and Minimax-M2.5 run at $0.27 to $0.80 per million input tokens, compared to $5 per million for Claude Opus 4.5 and $2 for Gemini 3. For high-volume, routine tasks (email drafts, social variations, research summaries, first-pass ad copy) the performance difference is marginal and the cost difference is 20x. Hosted inference providers such as DeepInfra, Together AI, and Fireworks AI make these models accessible without any infrastructure requirements, and most offer zero-data-retention policies that address the legal concerns your team raises about running proprietary content through commercial AI tools.

What to do today

Identify the three tasks your team runs through AI most frequently this week. Run the same prompts through a hosted open weights model on DeepInfra or Together AI alongside your current tool and compare the outputs side by side. If quality is comparable, start shifting those workflows. Track your monthly AI spend in a simple spreadsheet and log the delta over 60 days. Teams running significant content volume can reallocate the savings to higher-value tools or apply them toward the targeted AI training described above.
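Instead of a spreadsheet, the delta can also be estimated up front from the published rates the article cites. A minimal sketch; the 200M-token monthly volume is a hypothetical example, and these are input-token rates only (output rates differ), so treat the figures as a floor rather than a quote:

```python
# Per-million-input-token rates cited above (USD).
INPUT_RATE_PER_M = {
    "GLM-5": 0.27,
    "Minimax-M2.5": 0.80,
    "Gemini 3": 2.00,
    "Claude Opus 4.5": 5.00,
}

def monthly_input_cost(model, tokens_per_month):
    """Input-token cost for a month of routine work at the cited rates."""
    return INPUT_RATE_PER_M[model] * tokens_per_month / 1_000_000

volume = 200_000_000  # hypothetical: 200M input tokens/month of drafts and summaries
for model, rate in sorted(INPUT_RATE_PER_M.items(), key=lambda kv: kv[1]):
    print(f"{model:16s} ${monthly_input_cost(model, volume):10,.2f}/month")
```

At that volume the spread runs from tens of dollars to four figures per month, which is the reallocation headroom the paragraph above describes.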

Source: Almost Timely News: How To Get Started with Hosted Open Weights AI (2026-02-22)

LinkedIn Lost 60% of B2B Traffic. Your Pipeline Has a Blind Spot.

LinkedIn disclosed in late January that awareness-driven B2B traffic to their web properties dropped by up to 60%. Rankings held steady. Clicks collapsed. Google’s AI Overviews now reduce click-through rates for top-ranking results by 58%, nearly double the rate measured eight months earlier, and the pace is accelerating. Your buyers are still researching vendors; they’re doing it inside ChatGPT, Gemini, and Perplexity without ever landing on your website. LinkedIn, the second-most-cited domain in AI-generated responses, has abandoned traditional SEO metrics entirely in favor of a new framework: be seen, be mentioned, be considered, be chosen. The companies getting found are the ones whose content AI systems can parse, trust, and quote. The rest are invisible.

What to do today

Find your five highest-traffic LinkedIn articles and your top three website pages from the last quarter. Rewrite each one to directly answer the questions a B2B buyer would type into ChatGPT when researching your category. Use specific data points, clear headings, and a direct conclusion (you can use Claude, Gemini, or ChatGPT to help). Publish the revised versions this week. Then search your top category keywords in ChatGPT and Perplexity and check whether your brand appears in the responses. If it doesn’t, you have a gap that’s costing you inbound deals you can’t measure.
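The final check — does your brand appear in the responses — can be made repeatable by pasting each engine's answer into a small script. A minimal sketch; the brand name, aliases, and answer texts below are hypothetical:

```python
import re

def brand_mentioned(response_text, brand, aliases=()):
    """Case-insensitive whole-word check for a brand (or any alias) in an AI answer."""
    names = (brand, *aliases)
    return any(re.search(rf"\b{re.escape(n)}\b", response_text, re.IGNORECASE)
               for n in names)

# Hypothetical answers copied out of each engine for your category query.
answers = {
    "chatgpt": "Top ABM platforms include Demandbase and 6sense for enterprise teams.",
    "perplexity": "For mid-market teams, Acme Corp and 6sense are common picks.",
}
coverage = {engine: brand_mentioned(text, "Acme Corp", aliases=("Acme",))
            for engine, text in answers.items()}
print(coverage)  # {'chatgpt': False, 'perplexity': True}
```

Run the same queries weekly and log the results; a `False` that persists after you publish the rewritten pages is the measurable version of the gap described above.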

Source: LinkedIn: AI-powered search cut traffic by up to 60% | LinkedIn abandons traditional SEO as 60% traffic loss forces radical strategy shift

24% of Buyers Use AI to Make Purchase Decisions, and They’re Not Filling Out Your Forms

Kantar’s 2026 data shows 24% of AI users now rely on an AI assistant to make purchasing decisions on their behalf. A 2025 survey found 29% of buyers treat AI as their primary discovery channel, bypassing search engines entirely. For brands optimized for AI search, AI-driven discovery accounts for roughly 40% of inbound leads while driving only 20% of total traffic. Your content was built for humans who click, read, and submit forms. Machine buyers synthesize structured information across multiple sources and surface vendor recommendations based on what they can parse and verify. Brands without clear, specific, consistently formatted content are getting screened out before a human ever sees the shortlist, and that attrition doesn’t show up in your analytics.

What to do today

Pull last quarter’s lost deals from Salesforce and paste the CRM notes into Claude. Ask: “What questions would a buyer likely research in ChatGPT before contacting this company? What would the AI recommend based on publicly available information about this vendor?” Use the output to identify your top three content gaps. Then create short, Q&A-formatted pages on your website that answer those questions directly. Each page takes under an hour to draft with Claude and directly improves your odds of appearing in AI-generated vendor shortlists.

Source: Digital Marketing News: Feb 20–March 1, 2026 | LinkedIn’s AI Search Visibility Data: What 60% Traffic Loss Reveals

Did you miss February's newsletter? Check it out here! 

About the author
As Inverta's AI Practice Lead, he draws on deep B2B marketing automation expertise to help clients solve their most complex customer problems.

Artificial intelligence

Don’t feel behind; we’re all in this together. There are eight types of AI marketing pilots we're running with dozens of clients to help them shortcut the hype and prove real value.
Learn how we help

Published: March 13, 2026