I'll start with a question that probably gets an eye roll in a room full of operators: Is AI worth the investment?

In 2026, sitting with a crowd of RevOps professionals, it's almost rhetorical.

We've been living and breathing AI for years now, but step outside this bubble and you'll find a very different mood. There's a simmering skepticism in the C-suite, and honestly, it's earned.

The hype era of AI is over. We're now firmly in the reality phase, and our CFOs don't want to hear about magical transformation anymore (even if, like me, you have the word "transformation" in your title).

They want to know how every pilot, every seat, and every API call correlates directly to productivity and revenue impact. If it isn't measurable, it gets cut. That's the new normal.

The MIT stat everyone's quoting (and misreading)

You've probably seen the MIT statistic floating around: 95% of generative AI pilots are failing. When that number first dropped, we had customers at Gong start questioning their entire AI investment.

My colleague Craig Hanson wrote a piece recently arguing the stat is a little misleading, and I'll come back to that later.

For now, let's rewind a bit.

Back in November 2022, OpenAI released ChatGPT and suddenly AI was everywhere in our workplaces. We moved from leaders asking "What's a large language model?" to "We need an AI strategy" in about six weeks.

Many companies spun up AI councils and task forces (show of hands if you sat on one), and we started buying every point solution we could find. AI for call summaries. AI for deck creation. AI for SDR avatars. AI for email writing. AI for enrichment. The stack just kept growing.


It felt good because it felt fast. We were making visible progress, and when it came time for ELT or board meetings, we could report back on all these POCs and proud adoption numbers.

Today, that's shifted. Leaders are looking at their tech stacks and realizing these AI systems don't talk to each other.

MCP will help eventually, but right now they're trying to align business problems and OKRs directly with ROI, revenue impact, and productivity. And a lot of leftover shelfware from the pilot phase is still sitting there. We've all been handed a consolidation mandate.

The pace of change is outrunning the discipline

The velocity of change over the last ten years is roughly tenfold what it was in the twenty years prior. And the pace of change over the last 24 months (honestly, probably the last 90 days) has completely reshaped how we approach our roles.

RevOps used to be about managing processes, initiatives, CRM, and the kind of work a typical sales ops person did pre-2017. Now we're managing an algorithmic shift in how work itself gets done.

It's a pivotal moment for revenue operations. We're still a fairly young discipline, with the rebrand happening around 2017 to 2018. I've lived this from the operator's seat.

I spent eight years building out the RevOps and enablement function at Hearst, then pivoted to an AI-first digital marketing agency, one of the largest independent ad agencies in the US.

Before I joined Gong, I was deeply embedded in the platform as a user, as the person evaluating and buying it, and as the person who had to prove out its value every single day.

So I know the feeling. You wake up on a Monday, your CRO has new questions, and somehow none of your existing reports in HockeyStack answer them. You're spinning up new dashboards for a new ELT meeting before your coffee even cools.

I know the white-knuckle feeling of Friday afternoon board prep dropped in your lap with 20 minutes' notice, where you feel like you're just plugging holes.

Whether you're a one-person RevOps function or leading RevOps at a global enterprise, the struggle is real. Our plates are always overflowing. Compensation planning, territory management, CRM hygiene, tool governance: it all rolls downhill to us. And now we've all been handed AI on top of that.

For most of us, AI isn't a domain we were born into.

Unless you've spent the last decade in data science, learning and executing a sustainable AI strategy is an added burden on top of what already feels like an impossible, reactive job.

Why productivity is the only story leadership wants to hear

Earlier this year, I helped a client with their SKO, and we focused on the state of revenue AI. What struck me was the massive shift in focus.

In years past, when leaders got asked how they'd grow the business, the typical answers were M&A, market expansion, new product lines, and internal scaling. For the first time, that answer has changed.

They're looking to grow through productivity and efficiency. The stat I keep seeing is that roughly 98% of earnings calls this year have productivity as the headline mandate.

That brings me to one of the biggest mistakes I see ops teams making right now. They try to justify AI spend with one of the weakest metrics in the book: time saved.

"This tool will save our reps two hours of admin work a week." It sounds great on a slide. It sounds great when you're pitching internally. But your executives don't fund time. They fund outcomes.

As operators, it falls on us to prove that those two hours of reclaimed capacity actually get reinvested into high-value activity that leads to more pipeline, shorter deal cycles, higher win rates, or whatever the outcome is. It's about demonstrating impact, not just justifying spend.
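To make the reinvestment argument concrete, here's a back-of-envelope sketch of how "two hours saved per rep" could be translated into an outcome number. Every figure here is hypothetical (team size, reinvestment rate, pipeline-per-hour, win rate are placeholders you'd replace with your own data), and the model itself is deliberately simplistic; the point is only that capacity reclaimed is worthless on a slide until you state how much of it gets reinvested and what it converts into.

```python
# Back-of-envelope reinvestment math. ALL numbers below are hypothetical
# placeholders; substitute figures your own data actually supports.

REPS = 50                         # hypothetical team size
HOURS_SAVED_PER_REP_WEEK = 2      # the classic "two hours of admin" claim
WEEKS = 48                        # working weeks per year

# The assumption leadership will poke at: what fraction of reclaimed
# time actually goes back into selling activity?
REINVESTMENT_RATE = 0.5

# Hypothetical conversion factors: selling hours -> pipeline -> revenue.
PIPELINE_PER_SELLING_HOUR = 400   # $ of pipeline per reinvested hour
WIN_RATE = 0.25                   # pipeline-to-closed-won conversion

hours_reclaimed = REPS * HOURS_SAVED_PER_REP_WEEK * WEEKS
selling_hours = hours_reclaimed * REINVESTMENT_RATE
pipeline = selling_hours * PIPELINE_PER_SELLING_HOUR
revenue = pipeline * WIN_RATE

print(f"Hours reclaimed:  {hours_reclaimed:,}")
print(f"Reinvested hours: {selling_hours:,.0f}")
print(f"Pipeline created: ${pipeline:,.0f}")
print(f"Revenue impact:   ${revenue:,.0f}")
```

Notice that the whole story hinges on the reinvestment rate and the conversion factors, not the hours saved. Those are the assumptions executives will challenge, which is exactly why they need to be explicit rather than buried inside a "two hours a week" headline.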

To move past the skepticism, RevOps has to show leadership how to evaluate AI as a system of action. I know that phrase is a buzzword at this point, but the question underneath it matters: is AI actually changing a decision or behavior in a way that's sustainable 24 months from now?

That means operationalizing adoption. Moving past vanity metrics and embedding AI so deeply in the operating rhythm that teams have no choice but to adopt it. And it means quantifying reinvestment. Showing exactly where the saved time went and how it produced business value.


Running RevOps like a product team

One of the ways leaders have been bridging this gap is by running the RevOps function like a product organization. I'll oversimplify, but product teams don't just build something and ship it with a Slack message 24 hours later. They have roadmaps. They run alphas and betas. They build internal confidence before something goes GA.

That confidence is an overlooked organizational multiplier. I've experienced this firsthand. There's a meaningful difference between a seller who has been trained on something and a seller who genuinely feels confident adopting it sustainably.

RevOps teams, often in close partnership with enablement (some of us wear both hats), need to create confidence-building material alongside the roadmap. That means release notes with clear communication about what's changing and why.

No seller wants a Slack message in the morning saying there's new validation built into Salesforce and now they'll hit a roadblock when they try to fill out a field because you spun it up overnight. Build it into the roadmap. Communicate clearly.

It means how-to documentation that covers not just where to click but how to win. This is where orchestration and next-best-action really matter. And it means anticipatory guidance. Simple talk tracks so that when AI gives a seller a signal, they know exactly what to say to that specific persona or customer.

When ops teams start operating like product teams, adoption becomes unavoidable because the value is too high to ignore. The solution becomes indispensable.

Architects, not engineers

Here's a sensitive one. As a natural extension of the product-team mindset, there's been a lot of buzz about GTM engineers. I appreciate the sentiment, but I want to offer a reframe.

At Gong, we think the better framing is architect.

Outside of revenue organizations, engineers traditionally do one very specific job. They build to spec. Architects are more macro. We look at blueprints. We see how layers fit together. We ask whether the decision we make today will still be sustainable five years from now, whether the house will still be standing.

If we lean too hard into an engineering mindset, RevOps risks becoming an ivory-tower shop for building systems. An architect mindset keeps us grounded as a business function, not just a tech function.

Architects do things differently. We architect predictable, org-wide execution and change, not just tools. We stay close to the field, sitting on calls (or in Gong's case, listening to calls) to hear the real language of risk. I have a genuine fear of being too far removed from what sellers and GTM teams actually do day to day. We partner deeply with enablement so that roadmaps align.

When we use AI, we're turning lagging indicators (what happened, which is most traditional RevOps reporting) into next-best actions (what should happen now).

The highest-performing organizations this year don't need more dashboards or more reports. They need the architect mindset baked into the RevOps function.

Why those pilots keep failing

Let's come back to that MIT stat. Why are 95% of generative AI pilots failing?

Often it's because organizations don't like change management. There's a well-known dynamic where the pain of the status quo feels safer than the burden of imagining and executing something new. We don't want to rock the boat, so we isolate AI into tiny pilots with small groups in niche workflows.

We think we're being safe, but the instinct backfires. Those small pilots rarely give a realistic assessment of whether a solution will hold up across the entire organization. Going wider is usually more informative.

A successful rollout tends to move through a few phases.

  • The first phase is speed. How quickly can a team complete a given task? This phase is quantitative and fairly objective. Faster is easy to measure.
  • The second is effectiveness. How well are those tasks being performed? This is where better comes in, and where your organization can define what good actually looks like. As leaders, we can't pull benchmarks out of a hat, and we shouldn't blindly adhere to decade-old standards like "everyone needs 3x pipeline." You have to back into what your data actually supports.
  • The third, and in my view the make-or-break phase, is operating rhythm. This is where whatever you've built has to become so embedded in daily operations that it becomes unavoidable. It shows up in every client-facing area. Leadership has to lean in and be relentless about consistency across the org.

Most companies stall between the first two phases. They get a bit of efficiency, lack visibility into effectiveness, become afraid to scale, and stay stuck in small pilot groups.

Consolidation as a competitive advantage

As we look ahead, the theme you'll keep hearing is consolidation. And consolidation is more than cost-cutting. It's a competitive advantage.

A Gartner analyst put it well earlier this year, saying that because AI is in the trough of disillusionment throughout 2026, it will most often be sold to enterprises by their incumbent software providers rather than bought as part of a new moonshot project. He also said the improved predictability of ROI must occur before AI can truly be scaled across the enterprise.

What does that mean for us? RevOps is moving away from fragmented point-solution pilots and toward tech stack consolidation, because ROI only gets realized when data is unified.

Vendor sprawl costs more than money. It fractures accountability. Ask your sellers how many tabs they typically have open on a Monday and it gets scary. When your data lives in five different places, nobody really knows what the source of truth is.

Many of us are also stuck in the CRM-as-nucleus mindset. A lot of legacy enterprise organizations are golden-handcuffed to their CRM data. The CRM often becomes the ceiling of execution rather than the floor, and it can easily turn into a graveyard for manual data entry.

The bigger shift in platform consolidation is toward a revenue-specific operating system across go-to-market. A system where the goal goes beyond storing and collecting data and into helping sellers understand and orchestrate their next best action. When that exists, leaders reclaim time that was being spent reconciling the truth across five different systems, and they reinvest it in higher-value work.

This is why purpose-built AI matters so much. We've all seen someone drop a call transcript into a generic LLM and ask for a summary. That's not a strategy. It's not systematic. It's not safe. It's offloading a manual task. And plenty of sellers are still doing exactly that.

When we treat RevOps as the nucleus, we owe our organizations the most innovative purpose-built solutions we can find. Solutions reps feel they can't live without.

Coming back to the MIT study one last time, Craig wrote something in his article that opened a lot of eyes. Don't be fooled by the 95% stat. It's not as scary as it sounds. Pay attention to the overlooked number: when AI projects use purpose-built solutions, they're working about two-thirds of the time.

Here's the challenge I'll leave you with.

When you get back to your office on Monday, list your entire tech stack and ask yourself a simple question. If this platform went away tomorrow, who would ping you on Slack or Teams to complain? If the answer is nobody, you've got shelfware. If the answer is every rep on the floor would be chanting to bring it back, you've got an operating system.

We owe it to our organizations to be the nucleus and bring forward a tech stack that's scalable, efficient, and loaded with value.

Three things to take with you

To wrap up, three takeaways.

  • Stop counting hours. Start focusing on outcomes. If you can't show where the saved time got reinvested, leadership won't buy it.
  • Adopt the architect mindset. Design sustainable execution, not just systems. Stay close to the frontline and avoid the ivory tower.
  • Build a system of action, not just a system of record. Purpose-built AI helps create an operating system your teams genuinely can't live without.

RevOps went through its rebrand in 2017 and 2018, so we're still a young discipline. We need to take a long-term view and define what this function looks like ten years from now. Future-proofing is a cliché at this point, but it matters when you're thinking about long-term RevOps planning.

Think about electricity for a second. One of the most revolutionary developments in human history, and oil lanterns still coexisted with it for decades. People didn't flip a switch (pun intended) and change the world overnight.

AI is our electricity. There's a long curve of infrastructure, education, and standardization ahead before ROI is fully realized. We know it works. The real question is whether you feel like an architect. Are you architecting the workflows that will be embedded into the operating rhythm of the organization you support?

That's the work.