Beyond the Snapshot: Chris Selland on What Market Intelligence Gets Wrong About AI

Q1. You’ve spent over 30 years watching technology markets form, mature, and get disrupted — from enterprise software to data infrastructure to AI. What made this moment feel different enough to start something new rather than continue advising within existing structures? And what specifically about the current AI landscape did you find was not being adequately examined by traditional analyst firms?

A: My time in the traditional analyst industry ran from the late ’90s into the mid ’00s – so it’s definitely been a while. I had a wonderful six years at Yankee Group learning from my good friend and industry legend Howard Anderson, and a few years later I started Reservoir Partners, which was ultimately acquired by Aberdeen Group prior to their acquisition by Harte-Hanks. In the 20 years since, I’ve also worked with most of the major firms, a few times quite successfully. For instance, when we launched the HAVEn Big Data platform at HP in 2013, Gartner was deeply influential in our strategy.

I believe the traditional firms still have their value proposition intact – helping buyers make buying decisions within categories such as ERP, CRM, HCM, and data platforms. I have no desire to create quadrants, waves, or any other type of vendor-selection tool – that ground is well covered by the existing industry.

But what I don’t see the legacy firms doing is looking at the space between markets. I am fascinated by the idea of “competition for capital” because that view of organizational decision-making isn’t oriented around the category – i.e., “we have to decide which CRM to buy” – but rather around the problem – i.e., “our customer churn is too high and our Net Dollar Retention (NDR) is falling – how do we fix that to drive higher productivity and improve our margins?” The solution to that latter problem is never really about deciding what to buy; it’s about how to approach the problem – and which approaches to invest in. That is not only an IT decision – it is also strategic and financial. In this economic climate, the financial considerations are not just a priority; they are driving the entire agenda.


Q2. Differential Factor is built around what you call “Living & Causal Models” — a deliberate departure from the static research report that has defined the analyst industry for decades. That’s a significant architectural bet on how market intelligence should work. What is the fundamental limitation of the snapshot model that convinced you it needs to be replaced, and what does it actually mean in practice to make a research model “live” in a market moving at this velocity?

A: I mentioned I don’t want to create waves or quadrants – a more general way to say that is I have no desire to create static PDFs. In this era, static reports are out of date practically the moment they are published. The idea of Living and Causal Models is not mine – I need to give credit to my Northeastern University colleague Nik Bear Brown for opening my eyes to what’s possible. I joined the Northeastern faculty in 2024, but I actually worked with Nik prior to that, when I was CEO of Squark AI back in ’22–’23. Nik is a brilliant theorist and practitioner, and he’s got a team of grad students who do spectacular work envisioning, training, and building algorithms and models.

The fundamental idea is that Living & Causal Models are dynamic frameworks that can simulate real-world interventions and “What If” scenarios. By mapping the “why” behind business mechanics, they provide a real-time system of work for proactive strategic decision-making. So it’s well aligned with the Competition for Capital idea – it’s about the what-if decisions and counterfactuals that help organizations evaluate options and make decisions most effectively – and adapt them as the market changes. Differential Factor serves as a channel for delivering these frameworks, providing buyers with a system for ongoing decision-making that moves beyond the relatively simple question of ‘whose software to buy.’


Q3. You frame your work around “Intersection Analysis” — the collision points where adjacent markets meet — and describe this as where disruption becomes visible before it becomes consensus. From where you sit right now, which intersections in the AI ecosystem concern you most, not as opportunities but as genuine systemic risks that the market is pricing incorrectly or ignoring entirely?

A: When I look through the filter of Intersection Analysis—the collision points where disruption becomes visible—three major systemic risks jump out that I believe the market is currently mispricing or ignoring entirely. The first is the rapid compression of business cycles due to AI-accelerated knowledge work. This dramatic speed-up threatens to upend entire business models and impact GDP in ways we don’t yet understand how to model. 

I’m of course watching the disruption of established SaaS business models – and I’m certainly not the only one with this concern, as most of the investment community is very focused on the “SaaSpocalypse” right now. While the SaaS industry has historically anchored on the traditional per-seat, per-month ARR framework, the more forward-thinking providers have already pivoted toward consumption-based models. There’s still a lot of counterproductive guidance out there resisting this structural transition, frequently using the “unpredictability” of costs as a defensive argument. I see it differently: if an organization prioritizes price stability over productivity, it is solving for the wrong variable.

Regardless of pricing, legacy models assume a certain human pace of work, but when that work becomes agentic, the value metric shifts, potentially collapsing recurring revenue streams. This is why understanding where a buyer stands on the critical Build vs. Buy decision is so vital right now; we’re giving our application at bvb.differentialfactor.com to buyers – for free – specifically to gain insight into buyers’ decision-making process and gather real data on these systemic shifts.

Finally, the intersection of AI and cloud services is creating lock-in risks associated with cloud verticalization, as major cloud providers build deeply proprietary ecosystems around foundational models and data infrastructure, making it extremely expensive for enterprises to move. That’s a solvable problem over the long run – one interesting dynamic to watch will be large organizations who will start taking some of their infrastructure back from the cloud providers. The “on premise” data center is far from dead, and the BvB calculus is likely to revive many of those efforts.


Q4. One of the harder problems in covering AI markets is separating genuine capability shifts from what is essentially well-funded narrative. You’ve been a practitioner and an analyst, you’ve seen hype cycles up close in enterprise software and data, and you’re now studying AI from the outside. What does your pattern recognition tell you about where we are in this cycle — and what signals would cause you to revise that assessment in either direction?

A: When I look at this cycle, my pattern recognition tells me we are firmly in a rapid, non-linear phase – which is why I’m so optimistic about the long term – but it also creates short-term distortions that need to be understood. One of the biggest distortions is price: buyers need to be acutely aware of how heavily AI capabilities are being subsidized right now. I wouldn’t necessarily call it predatory pricing, but it is certainly a stage where investors are heavily subsidizing customer acquisition, driving the cost of intelligence way down. This is the classic low-end foothold of Disruptive Innovation in the Christensen sense – a topic I teach at Northeastern – where a “good enough” technology, like ‘vibe coding’, i.e., AI-assisted rapid prototyping, is initially cheaper but less polished than the incumbent. Buyers need to internalize that today’s near-free token costs don’t realistically reflect the underlying costs to deliver them, and that subsidy will eventually expire.

Now, the arguments against vibe coding are essentially that there are massive costs and risks in ripping out mission-critical systems of record like Salesforce, SAP, and others – and that is absolutely true. However, there are also thousands of less mission-critical SaaS solutions where the risks and costs are lower, and that’s where the near-term impact of the vibe-coding alternative – and other approaches – is already taking hold. Many of those point solutions were started and funded in the ZIRP (Zero Interest Rate Policy) period of ’20–’22, and those me-too SaaS products are experiencing serious buyer pressure.

The signal that would revise my assessment in the positive direction is the accelerating shift from simply using AI tools to fundamentally rewriting enterprise operating models around agentic systems, as that structural shift confirms the disruption is permanent. We’re just getting started, but that’s the path we’re clearly on.


Q5. Traditional market research was built around serving buyers — helping enterprises make purchasing decisions. But the “competition for capital” framing you use suggests a different primary audience and a different set of questions. Who most needs genuinely rigorous, causally grounded AI market intelligence right now, what decisions are they getting wrong without it, and what would it take for the research ecosystem to actually serve those needs rather than just describe the market after the fact?

A: I wouldn’t necessarily say the buyers are different, but the buying teams are much more diverse and balanced now, involving not just IT, but also business executives and, crucially, the finance organization—and they are all using AI themselves. The goal at Differential Factor is to move beyond simply listing vendors, which legacy analysts do a perfectly fine job of. 

Our goal is to show them the full spectrum of approaches they should consider, along with the implications of each. Our research focuses on critical capital and organizational considerations, examining variables like the financial architecture of the investment: while SaaS typically functions as an operating expense, internal engineering efforts can offer the opportunity for capitalization. This structural distinction carries massive fiscal weight and must be central to any rigorous strategic calculus.

Q6. Anything else you wish to add?

A: One of the main reasons I started Differential Factor was to take a “clean sheet of paper” approach to research. It’s still very early, and I want to stay open-minded to others’ ideas and partner where it makes sense – including some conversations I’m having with more traditional analysts. It’s also critical to stay open-minded in a time of such massive change. One of the things I most love about teaching at Northeastern is that I get feedback from my students and colleagues every day. I certainly invite your readers to drop me a note anytime – and to check out the Build vs Buy app at bvb.differentialfactor.com. It’s our first app, but it won’t be our only one – let me know what you think.

If you haven’t read The Innovator’s Dilemma in a while, this is a great time to pick it back up, because Disruptive Innovation is happening – right now – in the world of enterprise IT. Clayton Christensen was such a visionary, and I’ve had the opportunity to work with the Christensen Institute, which is doing a wonderful job carrying on his legacy. AI is the catalyst for massively disruptive change, and we’re only seeing the very start of it. It’s a fascinating time to be alive, and it’s incredibly energizing to be in the middle of it.

………………………………………………………….

Chris Selland is the Founder & CEO of Differential Factor, an AI-native research firm focused on applying “Living & Causal Models” and “Intersection Analysis” to the rapidly changing AI ecosystem.

He is also a Lecturer in Entrepreneurship & Innovation at Northeastern University’s D’Amore-McKim School of Business (DMSB), where he teaches on Industry Disruption and Corporate Innovation. With over 30 years of experience as an operator, executive, and analyst, his background includes multiple C-level roles, including CEO and GTM leadership, across data, analytics, FinTech, and AI. Earlier in his career he was an analyst at the Yankee Group and founded Reservoir Partners, which was later acquired by Aberdeen Group. He holds a B.S. from Cornell University and an M.B.A. from NYU Stern School of Business.
