Nine months ago, I was handed a challenge that probably sounds familiar to many Revenue Operations Alliance members: consolidate our various operational functions into a unified RevOps team and figure out how to use AI to optimize our go-to-market motion.

Simple charter, right? Just "use AI to optimize GTM." That's about as specific as telling someone to "use technology to make things better."

Here's what I've learned since then, and more importantly, what actually worked (and what spectacularly didn't).

The art of making sense of chaos

When I took on the horizontal responsibility for all our go-to-market tech, everything from marketing ops to sales ops to professional services ops, I inherited what can only be described as a beautiful disaster.

Picture this: we're an $800 million ARR company with 190 sales reps, acquiring five to ten companies per year. Each acquisition brings its own tech stack, its own processes, its own version of "the right way" to do things.

The first slide I was shown explaining our tech stack looked like someone had thrown spaghetti at a wall and called it architecture. Logos everywhere. Lines connecting things that maybe shouldn't be connected. Systems talking to systems in ways that made my head spin.

So we did what any reasonable person would do when faced with incomprehensible complexity. We stopped. We took a breath. And we stole some wisdom from those who came before us.

Building a framework that actually makes sense

We spent two months — yes, two full months — evaluating every significant vendor in our stack. For a company our size, that meant looking at anything over $50k annual spend.

We mapped costs, capabilities, and overlap. We traced the buyer journey from our perspective, marking each touchpoint as green, yellow, or red. Good, needs work, or needs replacement.

The exercise revealed some uncomfortable truths. We had four different AI tools that could write emails. Five different tools for reverse IP lookup. And vendors kept releasing new features that put them in competition with other tools we already owned. Clay, for instance, introduced three or four new capabilities in six months that suddenly overlapped with existing vendors.
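If it helps to picture the exercise, here's a rough sketch of how that kind of inventory can be modeled and scanned for overlap. The vendor names, categories, and spend figures below are invented for illustration; they aren't our actual stack.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str           # vendor name (illustrative only)
    category: str       # capability bucket, e.g. "email generation"
    annual_spend: int   # annual contract value in USD
    status: str         # "green", "yellow", or "red" from the touchpoint review

# Hypothetical inventory; a real audit pulls this from contracts and usage data.
stack = [
    Vendor("Tool A", "email generation", 60_000, "green"),
    Vendor("Tool B", "email generation", 75_000, "yellow"),
    Vendor("Tool C", "reverse IP lookup", 55_000, "red"),
    Vendor("Tool D", "reverse IP lookup", 90_000, "yellow"),
]

# Group vendors over the $50k audit threshold by capability...
by_category = defaultdict(list)
for vendor in stack:
    if vendor.annual_spend >= 50_000:
        by_category[vendor.category].append(vendor)

# ...and flag any capability we're paying for more than once.
for category, vendors in by_category.items():
    if len(vendors) > 1:
        total = sum(v.annual_spend for v in vendors)
        print(f"Overlap in {category}: {len(vendors)} tools, ${total:,} combined")
```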

But here's where it got interesting. When we reorganized all those logos into a taxonomy our executive team could actually understand, patterns emerged. Our demand generation stack was complex, with lots of specialized tools for specific funnel stages. Makes sense. It's a complex problem.

Our sales tech stack, though, was much simpler. Broader product categories and fewer overlaps.

This clarity was worth its weight in gold. It showed us where we were doubling up on capabilities and burning money. More importantly, it gave us a foundation for making strategic decisions about our future state.

From chaos to strategy: our four priorities

After mapping our current state across five different buyer journey stages, we emerged with four clear priorities:

  1. CRM hygiene and database building - We chose Clay for this
  2. Predictive scoring and smart workspaces - MadKudu became our weapon of choice
  3. Conversation workflows - Momentum won this battle
  4. Web personalization - Optimizely got the nod

Each choice came from careful evaluation against three critical factors, not just what the tool could do today:

  • Is it best in class?
  • How good is our current implementation?
  • Is it ready for an AI future?

The ROI conversation that changes everything

Here's where things got real. Getting executive alignment on priorities was the easy part. But getting budget approval was when the gloves came off.

The market isn't exactly booming right now. Boards are asking hard questions about returns. Every investment needs justification. So we learned to frame everything — and I mean everything — in terms of ROI.

We started with expected bookings relative to cost, aiming for 5x ROI. But that got messy fast. Some initiatives drove pipeline, others improved conversion, others increased traffic. So we settled on a universal metric: cost per dollar booked. Every decision, every investment, every experiment gets evaluated on how it impacts that number.

Sometimes the improvements are tiny. A cent here, two cents there. But stack enough of those together and suddenly you're looking at meaningful impact.
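For a back-of-the-envelope feel for why those cents matter, note that a 5x bookings-to-cost ratio is the same thing as $0.20 of cost per dollar booked. The sketch below uses made-up numbers, not our actuals, to show how a one-cent improvement translates into real budget.

```python
# Illustrative figures only; swap in real GTM spend and bookings.
gtm_spend = 40_000_000    # annual go-to-market spend in USD
bookings = 200_000_000    # annual bookings in USD

cost_per_dollar_booked = gtm_spend / bookings
print(f"Baseline: ${cost_per_dollar_booked:.2f} per dollar booked")  # $0.20, i.e. 5x ROI

# A one-cent improvement looks trivial on its own...
improved = cost_per_dollar_booked - 0.01

# ...but at the same bookings level it frees up meaningful budget.
savings = (cost_per_dollar_booked - improved) * bookings
print(f"One cent better: ${savings:,.0f} saved at flat bookings")  # $2,000,000
```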