A quarterly business review. The VP of Marketing presents pipeline numbers. Healthy growth. Good conversion from campaign to opportunity. The slide looks clean.
Then the VP of Sales speaks. “I don’t recognize these numbers. Half of what marketing calls pipeline, my team wouldn’t touch.” The room shifts. Finance asks which number is right. Nobody can answer, because both are technically correct. They’re just not measuring the same thing.
I’ve sat in this meeting more than once. Different companies, different industries, same conversation. And every time, the instinct is the same: “We need better dashboards.” A new report. A different BI tool. More data.
But the dashboards aren’t the problem. The problem is that when marketing says “qualified lead,” they mean something different from what sales means by the same words. Everyone is reporting accurately against their own definitions. The numbers just don’t add up across the table.
Same words, different meanings
The most dangerous terms in revenue operations are the ones everyone thinks they agree on. MQL. SQL. Active customer. Churn. Expansion. Pipeline.
These words show up in every dashboard, every board deck, every weekly standup. And almost nobody stops to ask: what exactly do we mean by this?
Take “qualified lead.” In most organizations I’ve worked with, marketing defines it as a lead that hits a scoring threshold — right industry, right company size, downloaded two whitepapers, attended a webinar. That’s an MQL.
Sales looks at the same lead and sees someone who hasn’t expressed any buying intent. No budget conversation. No timeline. No identified pain. To sales, that lead isn’t qualified. It’s a name.
Both teams are right by their own standards. But when marketing reports 200 MQLs and sales reports 30 SQLs, leadership sees a 15% conversion rate and asks what’s wrong. The real answer is that nobody agreed on what “qualified” means across the boundary.
This happens everywhere, not just in leads
“Active customer” is a good example. For CS, an active customer might be one who logged in this month. For finance, it’s one who has a current contract. For sales, it’s one who’s responsive to outreach. So when someone asks “how many active customers do we have,” three teams give three numbers. All defensible. None comparable.
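To make that concrete, here is a toy sketch (field names, dates, and records all invented) of three defensible filters over the same customer list, each a reasonable reading of “active”:

```python
from datetime import date

TODAY = date(2024, 6, 1)  # hypothetical reporting date

# Invented customer records; field names are illustrative, not from any real CRM.
customers = [
    {"name": "Acme",    "last_login": date(2024, 5, 20), "contract_end": date(2024, 12, 31), "last_reply": date(2024, 2, 1)},
    {"name": "Globex",  "last_login": date(2024, 1, 5),  "contract_end": date(2024, 9, 30),  "last_reply": date(2024, 5, 28)},
    {"name": "Initech", "last_login": date(2024, 5, 30), "contract_end": date(2024, 3, 31),  "last_reply": None},
]

# Three teams, three definitions of "active," three numbers.
cs_active      = sum(1 for c in customers if (TODAY - c["last_login"]).days <= 30)  # logged in this month
finance_active = sum(1 for c in customers if c["contract_end"] >= TODAY)            # current contract
sales_active   = sum(1 for c in customers
                     if c["last_reply"] and (TODAY - c["last_reply"]).days <= 90)   # responsive to outreach

print(cs_active, finance_active, sales_active)  # 2 2 1: all defensible, none comparable
```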
“Churn” is worse. Is it logo churn or revenue churn? Gross or net? Does a downgrade count? What about a customer who pauses for a quarter and comes back? I’ve seen organizations where CS reports 5% churn, finance reports 12%, and the board gets a blended number that represents neither.
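The divergence is mechanical, not mysterious. A quick sketch with made-up numbers shows how the 5% and the 12% can both fall out of the same quarter:

```python
# Made-up quarter for one book of business; every number below is invented.
starting_logos = 200
churned_logos  = 10        # customers who cancelled outright

starting_arr   = 1_000_000 # ARR at the start of the quarter
churned_arr    = 90_000    # ARR lost to outright cancellations
downgrade_arr  = 30_000    # ARR lost to downgrades: does this count as churn?
expansion_arr  = 50_000    # ARR gained from upsells to existing customers

logo_churn          = churned_logos / starting_logos                                # 5.0%
gross_revenue_churn = (churned_arr + downgrade_arr) / starting_arr                  # 12.0%
net_revenue_churn   = (churned_arr + downgrade_arr - expansion_arr) / starting_arr  # 7.0%

print(f"{logo_churn:.1%}  {gross_revenue_churn:.1%}  {net_revenue_churn:.1%}")
```

CS quoting logo churn and finance quoting gross revenue churn are both reporting honestly. They’re answering different questions.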
The metric alignment problem is what makes a full-lifecycle view of revenue either useful or decorative. Without shared definitions, you can’t calculate a conversion rate from one stage to the next, because the exit of one stage has to be the entry of the next, and that only works when the teams on each side of the handoff define that point the same way.
Why definitions drift in the first place
This isn’t malice. Nobody deliberately defines things in a way that makes their numbers look better. But incentives create exactly that outcome.
Marketing is measured on pipeline generation. So they define “qualified” in a way that maximizes the count of leads that pass the bar. A generous scoring model, a low threshold. All reasonable design choices in isolation. All optimized to make marketing’s number look healthy.
Sales is measured on closed revenue. So they define “qualified” as something much closer to “ready to buy.” Also reasonable. Also optimized for their own metric.
CS is measured on retention. So “active” means engaged. Finance is measured on recognized revenue. So “active” means contracted.
The drift happens because each team owns their own definitions, and nobody owns the definitions across teams. There’s no single person or function responsible for saying: “When we use this word in a report, this is what it means, everywhere, for everyone.”
What a shared metric vocabulary actually looks like
Every metric that crosses a team boundary needs a written definition that specifies exactly what is counted, how, and when. Not “MQL = qualified lead from marketing.” That’s a label. An operational definition sounds more like: “MQL = a lead with a fit score above 60 and a behavior score above 40, where fit score is calculated from industry, company size, and role match, and behavior score is calculated from website visits, content downloads, and event attendance within the last 90 days.”
That’s specific enough that two people can independently look at the same lead and arrive at the same classification. That’s the test. If your definition doesn’t pass that test, it’s not a definition yet.
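To show what “specific enough” looks like in practice, here is a sketch of that MQL definition as executable logic. The point weights are invented; what matters is the structure, where the same lead always produces the same answer:

```python
from datetime import date, timedelta

FIT_POINTS = {  # hypothetical fit scoring weights
    "industry_match": 30,
    "size_match": 20,
    "role_match": 20,
}

BEHAVIOR_POINTS = {  # hypothetical behavior scoring, counted within 90 days
    "website_visit": 5,
    "content_download": 15,
    "event_attendance": 25,
}

def fit_score(lead: dict) -> int:
    return sum(points for attr, points in FIT_POINTS.items() if lead.get(attr))

def behavior_score(lead: dict, today: date) -> int:
    cutoff = today - timedelta(days=90)
    return sum(BEHAVIOR_POINTS[kind] for kind, when in lead.get("activities", [])
               if when >= cutoff)

def is_mql(lead: dict, today: date) -> bool:
    # The written definition: fit score above 60 AND behavior score above 40.
    return fit_score(lead) > 60 and behavior_score(lead, today) > 40

lead = {
    "industry_match": True, "size_match": True, "role_match": True,
    "activities": [("content_download", date(2024, 5, 1)),
                   ("content_download", date(2024, 5, 10)),
                   ("event_attendance", date(2024, 4, 20))],
}
print(is_mql(lead, today=date(2024, 6, 1)))  # True: fit 70 > 60, behavior 55 > 40
```

Two people running this against the same lead record get the same classification. That’s the test passing.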
The definition also has to live in the CRM or data model, not in a slide deck. If “MQL” is defined by a score threshold, that threshold needs to be a field value, with the scoring logic documented and visible. If “active customer” means “has logged in within 30 days,” that login data needs to flow into the system where the metric is calculated.
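As a sketch of what “lives in the system” means (field and function names here are hypothetical), the metric becomes a value computed by documented logic, not a convention in a deck:

```python
from datetime import date, timedelta

ACTIVE_WINDOW_DAYS = 30  # the documented threshold behind "active customer"

def refresh_active_flag(customer: dict, today: date) -> None:
    """Write the metric onto the record so every report reads the same value."""
    cutoff = today - timedelta(days=ACTIVE_WINDOW_DAYS)
    customer["is_active"] = customer["last_login"] >= cutoff

customer = {"name": "Acme", "last_login": date(2024, 5, 20)}
refresh_active_flag(customer, today=date(2024, 6, 1))
print(customer["is_active"])  # True: last login 12 days ago, inside the 30-day window
```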
Metric definitions also need an owner. Not each team owning their own, but one function owning the cross-team vocabulary. Revenue operations is the natural home for this. The point is that someone has the authority to say “this is what this term means” and the responsibility to update it when the business changes.
And definitions aren’t permanent. The scoring model that made sense when you had 50 customers might not work at 500. Quarterly reviews of metric definitions, tied to the business review cadence, keep the vocabulary current.
The conversation that should happen before the dashboard
When a leadership team asks for a new dashboard, the first question shouldn’t be “what do you want to see?” It should be “what do these terms mean to you?”
Build a simple list. Ten or fifteen terms that show up in your reporting: MQL, SQL, opportunity, pipeline, active customer, churn, expansion, NRR, win rate, sales cycle. Go around the room and ask each function to define them. Write them on a whiteboard.
You will find disagreements. That’s the point. Those disagreements are currently invisible, buried inside dashboards that look precise but aren’t comparable. Making them visible is the first step to fixing them.
The output of that conversation is a metric dictionary. A single document, maintained by rev ops, that contains the operational definition of every term used in cross-functional reporting. It links each definition to the system field or calculation that produces it. It has a version number and a review date.
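One possible shape for a dictionary entry, sketched in code with invented field names, so that the definition, the system field that produces it, the owner, the version, and the review date travel together:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MetricDefinition:
    term: str
    definition: str    # the operational definition, written out in full
    source: str        # the system field or calculation that produces it
    owner: str         # the function accountable for the definition
    version: str
    next_review: date

mql = MetricDefinition(
    term="MQL",
    definition="Lead with fit score > 60 and behavior score > 40 "
               "(fit: industry, size, role; behavior: visits, downloads, "
               "events within 90 days).",
    source="crm.lead.fit_score, crm.lead.behavior_score",
    owner="Revenue Operations",
    version="2.1",
    next_review=date(2024, 9, 30),
)
```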
This isn’t exciting work. Nobody gets promoted for writing a metric dictionary. But the organizations where reporting actually drives decisions are the ones where this boring work got done.
In closing
Every reporting complaint I’ve encountered in ten years of CRM work can be traced back to one of two root causes. Either the data isn’t there, or the definitions aren’t aligned. The first one is a data problem. The second one is a conversation that never happened.
The instinct to fix reporting by building better reports is understandable. But if the people reading the report don’t agree on what the words mean, a prettier version of the same confusion isn’t progress.
The fix is slower and less satisfying. Sit in a room. Define your terms. Write them down. Put them in the system. Assign an owner. Review them quarterly.
Then build the dashboard.