AI Doesn't Fix a Broken Process. It Scales It.

The question I’m hearing more often now: “How do we use AI in our sales process?” Or: “We want to add AI to our customer service.” Or the most ambitious version: “We need an AI strategy.”

The question nobody is asking first: “Is our foundation ready for AI to be useful?”

This matters because AI is not magic. It’s an accelerator. It takes what exists and makes it faster, broader, or more consistent. If what exists is a well-structured process with clean data, AI accelerates something good. If what exists is a mess, AI produces the mess at scale, faster, and with more confidence than a human ever could.

I’ve spent ten years helping B2B companies fix their revenue operations. The pattern I’m seeing with AI is identical to the pattern I’ve seen with every technology wave before it. Companies want the new tool before they’ve fixed the foundation the tool sits on.

CRM didn’t fix broken sales processes. It automated them. Marketing automation didn’t fix misaligned marketing and sales teams. It let them send more emails into the same void. CPQ didn’t fix ungoverned pricing. It made it possible to generate bad quotes faster.

AI is next in that sequence. And the risk is the same, except the speed and scale of the damage are larger.

The foundation that matters

Before AI can help, three things need to be in place. Not perfect. But functional. Without them, AI is a layer of intelligence applied to a base of garbage.

A defined process.

AI can optimize a process. It can’t define one that doesn’t exist.

If your sales team doesn’t have consistent pipeline stages with clear exit criteria, AI-powered forecasting will produce confident predictions based on inconsistent data. The forecast will look sophisticated. It will be wrong.

If your support team doesn’t have a structured way of categorizing and routing tickets, an AI agent will automate the chaos. Customers will get fast responses to the wrong questions. Fast and wrong is worse than slow and correct.

If your renewal process is “someone remembers to check the calendar,” AI can’t proactively engage at-risk accounts because there’s no system to tell it what “at risk” means. There’s no health score. There are no adoption metrics. There’s no structured data about customer outcomes. The AI has nothing to work with.
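Defining “at risk” doesn’t require machine learning to get started. It can begin as a rule-based health score over a handful of structured fields. Here’s a minimal sketch; the field names and thresholds are illustrative assumptions, not taken from any specific product:

```python
# Hypothetical rule-based health score. Field names and weights are
# illustrative; the point is that "at risk" must be defined in data
# before any AI can act on it.

def health_score(account: dict) -> int:
    """Score 0-100; lower means higher renewal risk."""
    score = 100
    if account["logins_last_30d"] < 5:
        score -= 30  # low product usage
    if account["open_tickets"] > 3:
        score -= 20  # unresolved support issues
    if account["days_to_renewal"] < 90 and not account["renewal_started"]:
        score -= 25  # renewal approaching, no conversation yet
    if account["seats_used"] / account["seats_purchased"] < 0.5:
        score -= 15  # adoption below half of purchased seats
    return max(score, 0)

example = {
    "logins_last_30d": 2, "open_tickets": 5, "days_to_renewal": 60,
    "renewal_started": False, "seats_used": 10, "seats_purchased": 40,
}
at_risk = health_score(example) < 50
```

A crude score like this is still a definition the business can argue about and refine. That argument, not the scoring code, is the foundation work.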

Clean, structured data.

This is the one that kills most AI initiatives before they start.

AI works on data. The quality of the output is directly proportional to the quality of the input. If your CRM has 200 custom fields and half of them are empty, the AI doesn’t have a complete picture. If reps log activities inconsistently, the AI can’t identify patterns. If your product catalog has duplicate entries and undocumented pricing rules, the AI will make recommendations based on a product architecture that doesn’t match reality.

Every empty field, every inconsistent entry, every duplicate record is noise that degrades the AI’s ability to find signal.

The data foundation for AI isn’t a big data warehouse project. It’s much simpler and much harder: do the people who use your systems enter accurate, complete, structured data consistently? If the answer is no, fix that first.

A clear definition of what “good” looks like.

AI needs an objective. Not a vague one like “improve sales efficiency.” A specific, measurable one like “reduce quote turnaround from five days to same-day” or “identify at-risk renewals 90 days before expiry” or “route inbound leads to the right rep within one hour based on ICP fit.”

Without this definition, AI becomes a solution looking for a problem. Teams experiment with tools, build prototypes, run pilots, and produce demos that impress in presentations but don’t connect to any business outcome. The pilot works. The business impact is unclear. Leadership asks “what did we get for this?” and nobody has a good answer.

The definition of “good” should come from the same discovery process I described in the problem-solving post earlier in this series. AI is a tool that can be applied to the answer. It’s not the answer itself.

Where AI actually helps in revenue operations

When the foundation is in place, AI becomes genuinely powerful. Here’s where I see it adding real value today — not in theory, but in practice.

Lead scoring and routing. If your CRM has clean data on which leads converted, which accounts expanded, and which customers churned, AI can identify patterns that humans miss. Which firmographic and behavioral combinations predict the highest conversion? These patterns exist in the data. AI surfaces them faster and more accurately than manual analysis. But only if the data is there and it’s reliable.
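Some of those patterns can be surfaced even before any model is trained, simply by computing conversion rates per firmographic segment from historical leads. A sketch under assumed data, with hypothetical field names:

```python
from collections import defaultdict

# Historical leads with hypothetical firmographic fields and outcomes.
leads = [
    {"industry": "SaaS", "size": "mid", "converted": True},
    {"industry": "SaaS", "size": "mid", "converted": True},
    {"industry": "SaaS", "size": "smb", "converted": False},
    {"industry": "Manufacturing", "size": "mid", "converted": False},
    {"industry": "Manufacturing", "size": "ent", "converted": True},
    {"industry": "Manufacturing", "size": "ent", "converted": False},
]

def conversion_by_segment(leads, keys=("industry", "size")):
    """Conversion rate per combination of firmographic fields."""
    won = defaultdict(int)
    total = defaultdict(int)
    for lead in leads:
        segment = tuple(lead[k] for k in keys)
        total[segment] += 1
        won[segment] += lead["converted"]
    return {seg: won[seg] / total[seg] for seg in total}

rates = conversion_by_segment(leads)
```

If an analysis this simple produces nothing believable because the `converted` flag is unreliable or the firmographic fields are half empty, a machine-learned score built on the same data won’t be believable either.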

Forecasting. AI-powered forecasting can analyze pipeline movement patterns, deal velocity, and rep behavior to produce more accurate predictions than human judgment alone. But it needs consistent pipeline data. If reps update opportunities sporadically, the model learns the wrong patterns.

Content and communication personalization. AI can draft emails, recommend content, and personalize outreach based on account data, engagement history, and buying signals. This works well when the underlying data is rich. Without context, personalization becomes generic. “Hi [First Name], I noticed your company is in [Industry]” is what AI does with thin data. It’s what everyone else’s AI is doing too.

Support deflection and self-service. AI agents that handle routine support queries — order status, invoice questions, product information — can meaningfully reduce support volume. But they need a structured knowledge base to draw from. If the product documentation is scattered across PDFs, wikis, and email attachments, the AI will give inconsistent answers.

Configuration and quoting assistance. In complex product environments, AI can help reps select the right configuration by analyzing the customer’s requirements against the product catalog. But the product rules, compatibility constraints, and pricing logic need to be codified in the system first. AI doesn’t invent product rules. It applies them faster.

In each case, the AI is amplifying a foundation that already works. It’s not creating the process or cleaning the data. That’s still human work.

The readiness checklist

If you’re evaluating whether your organization is ready to use AI in revenue operations, here’s what I’d check.

Do you have a documented, consistently followed process for the area where you want to apply AI? If the process varies by rep, region, or mood, AI will learn the inconsistency and reproduce it.

Is the data in your CRM complete and accurate enough to inform a model? Pull a sample. Look at field completion rates for the records that matter. If key fields are empty on 40% of records, the AI is working with a partial picture.
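That audit is a few lines of scripting over a CRM export. A sketch assuming a hypothetical CSV export with the key fields as columns (the field list is illustrative):

```python
import csv

# Illustrative key fields; substitute the ones your model would depend on.
KEY_FIELDS = ["industry", "employee_count", "lead_source", "next_step"]

def completion_rates(path: str) -> dict:
    """Fraction of records with a non-blank value per key field."""
    counts = {f: 0 for f in KEY_FIELDS}
    total = 0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            total += 1
            for f in KEY_FIELDS:
                if (row.get(f) or "").strip():
                    counts[f] += 1
    return {f: counts[f] / total for f in KEY_FIELDS} if total else {}
```

Run it against an export of the records that matter, say, opportunities created in the last two quarters, and any key field sitting far below the 60% completion mark from the paragraph above is a red flag worth fixing before the AI pilot starts.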

Can you articulate the specific outcome you want AI to improve? Not “use AI to be more efficient.” Something like “reduce time from lead creation to first contact from 48 hours to under 2 hours.” If you can’t define the target, you can’t measure whether AI helped.

Do you have a baseline measurement? What’s the current performance on the metric you want to improve? Without a baseline, you’ll never know if the AI made a difference.

Is there a human who will own the AI’s output? Someone needs to monitor the recommendations, catch the errors, and tune the model as the business changes. If nobody owns it, the AI degrades quietly until someone notices the outputs don’t make sense anymore.

If the answer to most of these is no, the highest-value investment isn’t AI. It’s the foundation. Fix the process. Clean the data. Define the metrics. Then add AI.

The parallel to every technology wave

I’ve watched this pattern play out three times.

CRM in the 2010s: companies bought Salesforce expecting it to fix their sales process. It didn’t. It reflected whatever process — or lack of process — already existed. The companies that got value from CRM were the ones that redesigned their process first.

Marketing automation: companies bought HubSpot or Marketo expecting it to generate leads. It didn’t. It automated whatever lead generation approach already existed. The companies that got value defined their ICP and designed their nurture sequences before turning on the automation.

CPQ: companies bought Salesforce CPQ expecting it to fix quoting. It didn’t. It automated whatever pricing logic and product catalog already existed. The companies that got value designed their pricing waterfall and structured their product catalog before implementation.

AI is the same pattern at a higher level. The technology is more powerful. The potential is greater. But the dependency on the foundation is identical. AI on a solid foundation is transformative. AI on a broken foundation is expensive noise.

The companies that will get the most from AI aren’t the ones that adopt it fastest. They’re the ones that spent the last few years building the processes, cleaning the data, and defining the metrics that give AI something meaningful to work with.

The foundation isn’t glamorous. But it was already the right investment before AI arrived. AI just made the return on that investment much larger.